Sunday, 27 November 2011

Managing expectations

Have you ever been out for a few drinks with friends? Maybe you’ve had more to drink than usual. What happens next? Well, the answer seems to depend on which country you and the people you’re drinking with come from.

It seems that in some countries, people take the view that alcohol is so strong and people are so weak that anything is permissible. You can stand up in court and explain your actions – whatever they may be – by saying that you’d drunk too much. In other countries – like Italy – alcohol is grouped with food in the minds of people. You drink when you eat. You eat and drink with your friends and family. Using the defence of excessive alcohol would seem as absurd as using the defence of having eaten too many burgers to explain antisocial behaviour.

And it’s exactly the same with users. If they expect nanosecond response times to a CICS transaction they will be miffed when a response takes a second or two. Whereas, if they are used to a response taking a few seconds, they will be pleased when it takes less than two seconds for their screen to refresh.

Managing expectations can be the difference between happy users and unhappy users. In the same way it can be the difference between alcoholic destruction of everything on the way home and a great night out.

Banks seem to use the opposite technique. They pretend that they offer great service, but as every customer knows, they don’t. The news is always full of demands that the banks should lend more – particularly to small businesses. Speaking as the owner of a small business, I think this is not the real problem. I think the problem for most small businesses is the fact that banks charge too much for their services.

Now I don’t mind banks charging for the work they do – that’s the same model I use to stay in business! What I object to is the amount they charge. And I think this is part of the problem most small businesses face. For example, here in the UK, I get a lot of dollar cheques from the USA. I get an exchange rate that’s clearly in the bank’s favour and then I get charged for paying the money into my account. I get charged for paying in UK cheques. And I get charged even more for paying in cash!

So I guess my expectations are that banks are going to rip me off. They do nothing to manage that and make things better. And they really are the reason that a lot of small businesses are having a hard time during this recession – or whatever we’re calling it.

Revisiting the psychology for a moment: there are experiments where two groups of students were given free drinks all evening. Both groups got equally drunk. Then the experimenters explained that one group had drunk alcohol and the other group hadn’t. Once this second group were told they hadn’t had any alcohol, they immediately sobered up. Their expectations changed completely and they now behaved in a different way.

So, while IT strives to offer the best service to its users, it’s important that conversations take place between the two groups, so that users can describe their expectations of the service they want to receive, and IT can explain how the service is being delivered and give a realistic idea of what an end user should expect. Most sites have SLAs (Service Level Agreements), but these tend to be gathering dust somewhere rather than being constantly referred to. The importance of the conversation is to manage expectations and make sure both groups can continue to work, happy in the knowledge that they are getting or delivering the level of service that everyone expects.

Don’t forget that on Thursday 1 December there’s a webinar entitled: “How Important is the Continuous Availability of Your Critical Applications?” at 2pm GMT. You can register for the event at https://www1.gotomeeting.com/register/844029904.

And this is the last week that you can complete the Arcati Mainframe Yearbook user survey at http://www.arcati.com/usersurvey12. We need all the completed surveys by Friday evening.

Saturday, 19 November 2011

Continuous availability – no longer a dream?

Zero downtime is a goal that many companies are striving for. It sounds so straightforward, and yet it’s not that simple to achieve – especially when it involves the continuous availability of large, high-volume databases. One of the inherent problems is that data replication for high availability is filled with nuances that need to be addressed for a successful deployment, including maintaining sub-second latency, active/active considerations, scalability options, conflict detection/resolution, recovery, exception processing, and verifying that the source and target are synchronized properly.
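
To give a flavour of just one of those nuances, here’s a toy Python sketch of conflict resolution for active/active replication, using a simple last-writer-wins policy with losing writes kept for exception processing. The record layout is made up for illustration; real replication engines offer much richer, configurable policies:

```python
# Toy conflict resolver for active/active replication: if the same key
# is updated on both sites, the later timestamp wins ("last writer
# wins") and the losing write is kept for exception processing.
# The record layout is invented; real engines offer richer policies.

def resolve(local, remote):
    """Each change is (key, value, timestamp); return (winners, exceptions)."""
    winners, exceptions = {}, []
    for key, value, ts in local + remote:
        if key not in winners or ts > winners[key][1]:
            if key in winners:
                exceptions.append((key,) + winners[key])  # displaced write
            winners[key] = (value, ts)
        else:
            exceptions.append((key, value, ts))           # losing write
    return winners, exceptions

local_changes  = [("acct42", "balance=100", 10)]
remote_changes = [("acct42", "balance=175", 12)]
winners, exceptions = resolve(local_changes, remote_changes)
print(winners)      # the write that survives
print(exceptions)   # the write routed to exception processing
```

Even this toy version shows why exception processing matters: the losing write isn’t silently discarded, it’s kept so someone (or something) can review it later.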

One of the problems that organizations face is the need to address lots of different business issues using what often ends up being multiple software packages. Integrating these different pieces of software – perhaps even from different vendors – can add an extra level of complexity to the job in hand. What those organizations really need is a single piece of software that’s flexible enough to provide a comprehensive solution for changed data capture, replication, enhancing existing ETL (Extract, Transform, and Load) processes, and data migrations/conversions. Quite a big ask.

Wouldn’t you be interested in software that offers industrial-strength, near-real-time data integration solutions that include high-performance Changed Data Capture (CDC), data replication, data synchronization, enhanced ETL and business event publishing? And what if it was equally simple to experience the high-speed delivery of mainframe data (IMS, DB2, VSAM, etc) into data warehouses and downstream applications? Too good to be true?

If you’re like me, you carry around a list of capabilities in your head, and tick them off – or more often don’t tick them off – when you give software the once-over. So here are the kinds of things I’d have on my list for an integration engine. In general I’d expect:
  • Concurrent operation across multiple operating system platforms
  • Multi-step processes within a single script (UNION)
  • Simultaneous multi-record type file handling
  • Multi-level array handling (repeating groups) of source data store records/rows
  • Data filtering and cleansing
  • Dynamic look-up table processing
  • Support for data transfer and communication using TCP/IP and MQSeries
  • Preservation of referential integrity (RI) rules on target updates
  • Joins/Merges of heterogeneous databases/files.
In terms of data transformation I’d like to see:
  • Case (If/Else) logic
  • Extensive date cleansing and formatting
  • Arithmetic functions (add, subtract, multiply, etc)
  • Aggregation functions (sum, min, max, avg, etc)
  • Data type conversions
  • String functions
  • Data filtering
  • XML data formatting
  • Delimited data formatting.
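
To make that transformation list a bit more concrete, here’s a toy Python sketch of a few of those items – if/else logic, date cleansing and formatting, type conversion, filtering, and aggregation – running over some made-up rows. It’s an illustration of the ideas, not any vendor’s engine:

```python
from datetime import datetime

# Made-up source rows with mixed date formats and string amounts
rows = [
    {"id": 1, "region": "UK", "sold": "27/11/2011", "amount": "100.50"},
    {"id": 2, "region": "US", "sold": "2011-11-27", "amount": "80.25"},
    {"id": 3, "region": "UK", "sold": "28/11/2011", "amount": "19.75"},
]

def clean_date(value):
    # Date cleansing: accept a couple of common layouts, emit ISO 8601
    for fmt in ("%d/%m/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError("unrecognized date: " + value)

# One transformation pass: filtering, type conversion, and case logic
transformed = [
    {
        "id": r["id"],
        "sold": clean_date(r["sold"]),                 # date formatting
        "amount": float(r["amount"]),                  # data type conversion
        "band": "major" if float(r["amount"]) >= 50 else "minor",  # if/else
    }
    for r in rows
    if r["region"] == "UK"                             # data filtering
]

# Aggregation over the filtered set
total = sum(r["amount"] for r in transformed)
print(transformed)
print(total)
```

The point of a proper engine, of course, is doing this sort of thing at scale across heterogeneous data stores, not three dictionaries in memory.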
When it comes to datastore processing I’d want:
  • High performance bulk data transfer
  • Concurrent processing of multiple data store types
  • Creation of target data stores from source data store format
  • Insert/append to existing target data stores
  • Update/replace existing target data stores
  • Delete from existing target data stores
  • New column/field creation.
And for Data Movement, my list includes MQSeries, TCP/IP, and FTP.

If there was also some kind of Integration Center that had an easy-to-use Graphical User Interface (GUI) enabling users to quickly develop data integration interfaces from a single control point – that would be good. Additionally, some way to develop, deploy, and maintain data interfaces, create relational DDL (Data Definition Language), XML (Extensible Markup Language), and C/C++ structures from COBOL copybooks, monitor the status of integration engines, and an integrated metadata repository – that would be a real plus.

I’d definitely want to find out more about a single piece of software that provided high-performance Changed Data Capture (CDC) and Apply, data replication, event publishing, Extract, Transformation, and Load (ETL), and data conversions/migrations.
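
For anyone who hasn’t met CDC before, the idea can be sketched in a few lines. Real CDC products read the database logs so they can deliver changes with sub-second latency; this made-up snapshot-diff version just shows what “changed data” means:

```python
# Minimal snapshot-diff illustration of Changed Data Capture (CDC).
# Real CDC products read the database logs rather than comparing
# snapshots; this toy version just diffs two keyed snapshots to show
# the inserts, updates, and deletes that a CDC engine would publish.

def capture_changes(before, after):
    """Diff two {key: row} snapshots into (inserts, updates, deletes)."""
    inserts = {k: v for k, v in after.items() if k not in before}
    deletes = {k: v for k, v in before.items() if k not in after}
    updates = {k: v for k, v in after.items()
               if k in before and before[k] != v}
    return inserts, updates, deletes

before = {1: ("ALICE", 100), 2: ("BOB", 250)}
after  = {1: ("ALICE", 175), 3: ("CAROL", 50)}

ins, upd, dels = capture_changes(before, after)
print(ins)    # row 3 appeared
print(upd)    # row 1 changed
print(dels)   # row 2 disappeared
```

Those three change sets – applied in order at the target – are what keeps a replica in step with its source.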

So, if you’re like me and want to know more, there’s a webinar from SQData’s Scott Quillicy on 1 December at 2pm GMT (8am CST). To join the webinar from your PC, you need to register before the event at https://www1.gotomeeting.com/register/844029904. I’ll see you there.

Sunday, 13 November 2011

Guest blog – Mainframe security: who needs it?

This week, for a change, I’m publishing a blog entry from Peter Goldberg, a senior solution architect at Liaison Technologies, a global provider of cloud-based integration and data management services and solutions based in Atlanta. He works directly with customers to identify their unique data security and integration challenges and helps to design solutions to suit their organizations’ requirements. A frequent speaker at industry conferences on eBusiness security issues and solutions, he can be reached at pgoldberg@liaison.com.

I’ve been helping companies on both sides of the pond solve their data security problems for many years now. If I’ve learned one thing, it’s this: when I go into an organization that runs Windows, there’s little question of the need for data security. The organization knows it and so do I. When I visit a company whose IT infrastructure revolves around a mainframe, however, the mindset is often quite the opposite. In fact, the biggest data security misconception I encounter is the belief that the mainframe environment is inherently secure. Most IT staff don’t view the mainframe as just another network node that needs protecting. Why? Because it’s universally perceived as a closed environment and, therefore, invulnerable to hackers.

In some cases, it’s the mainframe IT pros who hold this conviction. In other instances, it’s the executive management team. Lack of management attention allows “bad practices” to continue. I can tell you this without reserve: data stored in mainframes needs protection just as much as sensitive information stored on a Windows server or anywhere else. And, as systems continue to support more data, users, applications, and services, effective security management in the mainframe environment becomes significantly more difficult.

News flash: mainframes can be hacked!

For that simple reason, mainframe security should not be taken for granted.

Even though the mainframe is a mature platform, there is a real shortage of mainframe-specific security skills in the market. And, the few mainframe security practitioners who are out there spend a lot of time implementing configuration and controls within their environments as well as putting into place security systems like RACF, which provide access control and auditing functionality. As for other security measures, in my experience, the mainframe people know about encryption, but they’re not terribly aware of newer data security techniques like tokenization as it relates to protecting data within the mainframe environment and beyond.

Tokenization is a data security model that substitutes surrogate values for sensitive information in business systems. A rapidly rising method for reducing corporate risk and supporting compliance with data security standards and data privacy laws, it can be used to protect cardholder information as well as Personally Identifiable Information (PII) and Protected Health Information (PHI).
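
If you haven’t come across tokenization, a toy sketch makes the model clear. The vault class and its API below are invented for illustration – real products add access control, auditing, and durable, protected storage:

```python
import secrets

# Toy token vault: substitutes surrogate values for sensitive ones.
# The class name and API are invented for illustration only.

class TokenVault:
    def __init__(self):
        self._by_token = {}
        self._by_value = {}

    def tokenize(self, value):
        # Reuse the existing token so the same card number always maps
        # to the same surrogate (useful for joins and de-duplication)
        if value in self._by_value:
            return self._by_value[value]
        token = "tok_" + secrets.token_hex(8)  # bears no relation to value
        self._by_token[token] = value
        self._by_value[value] = token
        return token

    def detokenize(self, token):
        return self._by_token[token]

vault = TokenVault()
pan = "4111111111111111"
token = vault.tokenize(pan)
assert token != pan                    # downstream systems see only the token
assert vault.tokenize(pan) == token    # deterministic per value
assert vault.detokenize(token) == pan  # only the vault can reverse it
```

The key property is that the token carries no mathematical relationship to the original value – unlike encryption, there’s nothing to break; you have to compromise the vault itself.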

In fact, for companies that need to comply with the Payment Card Industry’s Data Security Standard (PCI DSS), tokenization has been lauded for its ability to reduce the cost of compliance by taking entire systems out of scope for PCI assessments. And, even in companies that do not deal with PCI DSS or other mandates, tokenization has proven effective for managing the duplication of data across LPARs and for facilitating the usage of potentially sensitive data for development purposes.

Too often, compliance audits skim over mainframe control weaknesses and there are also fewer mainframe-specific security guidelines. But this does not mean that significant risk is not there. You can apply a risk-based, defence-in-depth approach within the mainframe environment by using stronger mainframe host security controls and by using tokenization to protect the data itself.

To beef up data security on a mainframe, here’s my advice:
  1. Bring in mainframe security experts to identify and remediate risks, and to develop and enforce security policies and procedures.
  2. Develop in-house capabilities and skilled professionals across the mainframe platform to support security initiatives.
  3. Evaluate available security configuration and administration tools – there are some really good ones out there.
  4. Apply a defence-in-depth security strategy that includes secure access and authentication controls, and use them appropriately.
  5. Adopt encryption and tokenization to protect sensitive information. Through their proper implementation, it’s really not that hard to achieve a truly high level of protection within the mainframe environment.

Protecting sensitive and/or business-critical data is essential to a company’s reputation, profitability, and business objectives. In today’s global market, where business and personal information know no boundaries, traditional point solutions that protect certain devices or applications against specific risks are insufficient to provide cross-enterprise data security. Combining encryption and tokenization, along with centralized key management, as part of a corporate data protection programme works well – including in mainframe-centric environments – for protecting information while reducing corporate risk and the cost of compliance with data security mandates and data privacy laws.

Don’t be fooled: your mainframe isn’t inherently secure. Doing nothing is no longer an option!

Thanks Peter for your guest blog.
And remember, there's still time to complete the mainframe user survey or place a vendor entry in the Arcati Mainframe Yearbook 2012.

Saturday, 5 November 2011

Guide Share Europe – an impression

I could only make Day 1 of this year’s Guide Share Europe conference on the 1st and 2nd of November – which was a huge disappointment. For those of you who weren’t there, I thought I’d give you a flavour of my experience.

Firstly, it was at Whittlebury Hall again – a magnificent venue just over the border from Buckinghamshire into Northamptonshire. The setting is stunning and the facilities are excellent. It is in the countryside, so if you’re travelling by train, there’s a long taxi ride to get there. If you travel by car, there’s a huge car park.

The exhibition hall is big, but not so big that you get lost in it. Because lunch and coffee were served in the hall, there were plenty of opportunities to engage with vendors and chat to other attendees. I always find it’s a great opportunity to catch up with old colleagues and make new friends. The quality of the coffee and food was good – which translates as excellent when compared to some venues!

But the point of GSE is not the food, it’s the presentations. I chair the Virtual IMS user group and the Virtual CICS user group, so I was torn between the CICS and IMS streams. In the end, I split my time between them. I watched Circle’s Ezriel Gross present on Using CICS to Deploy Microsoft .Net Winforms with Smart Client Technology – which was really fascinating. I’m sure we’re going to see more sites integrating their Windows technology with the power of mainframe subsystems. Ezriel made quite a complicated integration seem straightforward and obvious.

Next I watched IBM’s Alison Coughtrie talk about IMS 12 Overview. Another knowledgeable speaker with a lot of information to get over in the time. I certainly think I have a clearer idea of what’s new, and perhaps a small insight into where IBM is taking the product.

After lunch it was Neil Price, who works for TNT Express and chairs the IMS group for GSE, with a presentation entitled Memoirs of a HALDBA. I was so impressed with Neil’s real-life descriptions that I’ve asked him to speak to the Virtual IMS user group. Neil could have gone on for much longer than the time allowed. And I could happily have gone on listening.

Next up in the IMS stream was IBM’s Dougie Lawson. Dougie is another fantastically knowledgeable IBMer, who you may have come across when you’ve had an IMS problem. He talked about The Why and How of CSL. A real bits and bytes expert, who could have talked much longer.

I felt it was time to sit in on the CICS stream and the session I chose was IBM’s Ian Burnett talking about CICS Scalability. Yet again, a fact-filled presentation that would be hard to criticize. I felt my knowledge about CICS (and I used to edit CICS Update) making more sense and falling more into place.

But all work and no play makes Jack a dull boy – as they say. And the evening presentation was How To Cope With Pressure & Panics Without Going Into Headless Chicken Mode from Resli Costabell. A mixture of psychology, NLP, and audience participation made this a memorable session. If you get a chance to see her anywhere – don’t miss it!

After that there were drinks in the exhibition hall sponsored by Attachmate/Suse and Computacenter, followed by dinner sponsored by EMC and Computacenter. Both were very enjoyable in their own way, and they were an opportunity to chat more informally with vendors and real mainframe users. Obviously, I was telling vendors about sponsorship opportunities with the Arcati Mainframe Yearbook, and asking users to complete the user survey.

In conversation, I asked a few of the vendors how business was going. No-one admitted that the double-dip recession was taking them out of business, but most suggested that they were keeping their heads above water and business generally was flat – though there was some business being done.

An IBMer suggested that over 30 z196s had been sold in the UK and eight of the new z114s. So, that’s good news for them.

My overall impression of the conference was that it was excellent. I bumped into Mark Wilson (the GSE technical coordinator) during the day as he rushed around making sure everything was going smoothly. And that’s why the conference works so well, because people like Mark work so hard to ensure it does.

Well done everyone who organized it and spoke at it. And if you missed it, go next year.

Sunday, 30 October 2011

Two things you thought would never happen at IBM

I guess any two pundits sitting in a room together 10 years ago and talking about IBM’s future would have been more likely to predict Star Trek-like beaming technology and computers you could talk to than a mainframe that integrated Windows servers and a woman landing the top job at IBM.

And here we are. It’s almost November 2011, and both are about to come to pass.

The zEnterprise 196 and its Business Class sibling, the zEnterprise 114, come with the zEnterprise BladeCenter Extension (zBX). Initially this supported AIX on Power blades and Linux on x86 blades. That fitted nicely with IBM’s model of the universe because it owns AIX, and Linux is, of course, open source – ie it doesn’t belong to anybody. The Unified Resource Manager (URM) controls the operating systems and hypervisors on the mainframe and the blades. But now – the previously unthinkable – IBM promises that it will have Windows running on its HX5 Xeon-based blade servers for the zBX chassis before the end of this year.

Microsoft Windows Server 2008 R2 Datacenter Edition will run on those HX5 blade servers in the zBX enclosures. A zBX extension can have 112 PS701 blades or 28 HX5 blades.

This is clearly important for those sites that use mainframes, or are ready to upgrade to mainframes, and still have a big Windows-using population. It’s interesting that so many people consider Windows to be the de facto computing platform. I recently had a conversation where Windows laptops were likened to rats or beetles – they just turn up everywhere – and Linux was likened to a stealth operating system or a hidden shadow – it was everywhere, but you didn’t see it. Why stealth? Well, because Linux turns up behind the scenes on routers, on TiVo boxes, on supercomputers, as the precursor to Android on smartphones, making movies at Pixar and DreamWorks, in the military, in governments – everywhere!

After Windows on IBM hardware, the next thing we hear is that Virginia M Rometty, a senior vice president at IBM, is going to be the company’s next CEO – starting in January. “Ginni”, aged 54 (as all the releases inform us), succeeds Samuel J Palmisano, who is 60, and will remain as chairman.

Ms Rometty graduated from Northwestern University with a degree in computer science and joined IBM in 1981 as a systems engineer. She moved through different management jobs, working with clients in a variety of industries. Her big coup was in 2002, when she played a major part in the purchase of the very big consulting firm, PricewaterhouseCoopers Consulting. PwC staff were used to working in a different way from IBM’s, and managing that culture shift was down to Ms Rometty.

In 2009, Ginni became senior vice president and group executive for sales, marketing, and strategy.

You’ll recall that Sam Palmisano took over in 2003 from Louis V Gerstner Jr, who’d joined IBM from RJR Nabisco in 1993 and helped turn round an ailing IBM. The previous incumbent had been the lacklustre John Akers.

I suppose with Siri on iPhones and the much less serious about itself Iris on Android, we’ve moved some way towards being able to talk to a computer – even if it is a smartphone. Still no sign of Scotty being beamed up, though!

Saturday, 22 October 2011

Guide Share Europe annual conference

The Guide Share Europe (GSE) UK Annual Conference is taking place on 1-2 November at Whittlebury Hall, Whittlebury, Near Towcester, Northamptonshire NN12 8QH, UK.

Sponsors this year include IBM, Computacenter, EMC, Attachmate, Suse, CA, Novell, Compuware, IntelliMagic, RSM Partners, Velocity Software, and Zephyr. And there will be 30 vendors in the associated exhibition.

There’s the usual amazing range of streams – and, to be honest, there are a number of occasions when I would like to be in two or more places at once over the two days. The streams are: CICS, IMS, DB2, Enterprise Security, Large Systems Working Group, Network Management Working Group, Software Asset Management, Tivoli User Group TWS, Tivoli User Group Automation, MQ, New Technologies, zLinux, and the single-session Training & Certification.

That means that at this year’s conference there will be 126 hours of education covering most aspects of mainframe technology. This is slightly less than last year, because two of the Tivoli streams that were included last year have been dropped – they were so poorly attended. This year, there will be 12 streams of ten sessions over the two days, plus five keynotes and that one Training & Certification WG meeting. In all, there are going to be 85 speakers delivering this training.

There is still time to register, and the organisers are expecting the daily total of delegates to exceed 300 – as it did last year. 

There are also 16 students attending this year, who are taking mainframe courses at UK universities. The majority of students are from the University of the West of Scotland (UWS), but there will also be some from Liverpool John Moores University and possibly some more from other UK universities. The organisers have prepared a series of 101 sessions on mainframe architecture and infrastructure that will give these students – as well as trainees and those unfamiliar with parts of the infrastructure – a basic understanding of the mainframe and how it works.

Many GSE member companies are taking advantage of the five free places they get to send their staff to the conference. This would cost non-members £1000 in early-bird prices, and more than compensates member companies for the recent rise in the GSE membership fee to EUR 840.

You can find out more details about the conference at www.gse.org.uk/tyc/invite.html.

If you’re still debating whether to go, let me recommend it to you. The quality of presentations is always excellent. And the networking opportunities are brilliant. If you are going, I look forward to seeing you there.

Sunday, 16 October 2011

The Arcati Mainframe Yearbook 2012

The Arcati Mainframe Yearbook has been the de facto reference work for IT professionals working with z/OS (and its forerunner) systems since 2005. It includes an annual user survey, an up-to-date directory of vendors and consultants, a media guide, a strategy section with papers on mainframe trends and directions, a glossary of terminology, and a technical specification section. Each year, the Yearbook is downloaded by around 15,000 mainframe professionals. The current issue is still available at www.arcati.com/newyearbook11.

Very shortly, many of you will receive an e-mail informing you that Mark Lillycrop and I have started work on the 2012 edition of the Arcati Mainframe Yearbook. If you don’t get an e-mail from me about it, then e-mail trevor@itech-ed.com and I will add you to our mailing list.

As usual, we’re hoping that mainframe professionals will be willing to complete the annual user survey, which will shortly be up and running at www.arcati.com/usersurvey12. The more users who fill it in, the more accurate and therefore useful the survey report will be. All respondents before Friday 2 December will receive a free PDF copy of the survey results on publication. The identity and company information of all respondents is treated in confidence and will never be divulged to third parties. Any comments made by respondents will also be anonymized before publication. If you go to user group meetings, or just hang out with mainframers from other sites, please pass on the word about this survey. We’re hoping that this year’s user survey will be the most comprehensive ever. Current estimates suggest that there are somewhere between 6,000 and 8,000 companies using mainframes, spread over 10,000 sites worldwide.

Anyone reading this who works for a vendor, consultant, or service provider, can ensure their company gets a free entry in the vendor directory section by completing the form at www.arcati.com/vendorentry. This form can also be used to amend last year’s entry.

As in previous years, there is the opportunity for organizations to sponsor the Yearbook or take out a half page advertisement. Half-page adverts (5.5in x 8in max landscape) cost $700 (UK£420). Sponsors get a full-page advert (11in x 8in) in the Yearbook; inclusion of a corporate paper in the Mainframe Strategy section of the Yearbook; a logo/link on the Yearbook download page on the Arcati Web site; and a brief text ad in the Yearbook publicity e-mails sent to users. Price $2100 (UK£1200).

To put that cost into perspective, for every dollar you spend on an advert you reach around 22 mainframe professionals.

The Arcati Mainframe Yearbook 2012 will be freely available for download early in January next year.