Sunday, 27 November 2011

Managing expectations

Have you ever been out for a few drinks with friends? Maybe you’ve had more to drink than usual. What happens next? Well, the answer seems to depend on which country you and the people you’re drinking with come from.

It seems that in some countries, people take the view that alcohol is so strong and people are so weak that anything is permissible. You can stand up in court and explain your actions – whatever they may be – by saying that you’d drunk too much. In other countries – like Italy – alcohol is grouped with food in people’s minds. You drink when you eat. You eat and drink with your friends and family. Using the defence of excessive alcohol would seem as absurd as using the defence of having eaten too many burgers to explain antisocial behaviour.

And it’s exactly the same with users. If they expect nanosecond response times to a CICS transaction, they will be miffed when a response takes a second or two. Whereas, if they are used to a response taking a few seconds, they will be pleased when it takes less than two seconds for their screen to refresh.

Managing expectations can be the difference between happy users and unhappy users. In the same way, it can be the difference between drunken destruction of everything on the way home and a great night out.

Banks seem to use the opposite technique. They pretend that they offer great service, but as every customer knows, they don’t. The news is always full of demands that the banks should lend more – particularly to small businesses. Speaking as the owner of a small business, I think this is not the real problem. I think the problem for most small businesses is the fact that banks charge too much for their services.

Now I don’t mind banks charging for the work they do – that’s the same model I use to stay in business! What I object to is the amount they charge. And I think this is part of the problem most small businesses face. For example, here in the UK, I get a lot of dollar cheques from the USA. I get an exchange rate that’s clearly in the bank’s favour and then I get charged for paying the money into my account. I get charged for paying in UK cheques. And I get charged even more for paying in cash!

So I guess my expectations are that banks are going to rip me off. They do nothing to manage that and make things better. And they really are the reason that a lot of small businesses are having a hard time during this recession – or whatever we’re calling it.

Revisiting the psychology for a moment: there have been experiments where two groups of students were given free drinks all evening. Both groups got equally drunk. Then the experimenters explained that one group had drunk alcohol and the other group hadn’t. Once the second group were told they hadn’t had any alcohol, they immediately sobered up. Their expectations changed completely and they now behaved in a different way.

So, while IT strives to offer the best service to its users, it’s important that conversations take place between the two groups so that users can describe their expectations of the service they want to receive, and IT can explain how the service is being delivered and give a realistic idea of what an end user should expect. Most sites have SLAs (Service Level Agreements), but these tend to be gathering dust somewhere rather than being constantly referred to. The importance of the conversation is to manage expectations and make sure both groups can continue to work, happy in the knowledge that they are getting or delivering the level of service that everyone expects.

Don’t forget that on Thursday 1 December there’s a webinar entitled: “How Important is the Continuous Availability of Your Critical Applications?” at 2pm GMT. You can register for the event at https://www1.gotomeeting.com/register/844029904.

And this is the last week that you can complete the Arcati Mainframe Yearbook user survey at http://www.arcati.com/usersurvey12. We need all the completed surveys by Friday evening.

Saturday, 19 November 2011

Continuous availability – no longer a dream?

Zero downtime is a goal that many companies are striving for. It sounds so straightforward, and yet it’s not that simple to achieve – especially when it involves the continuous availability of large, high-volume databases. One of the inherent problems is that data replication for high availability is filled with many nuances that need to be addressed for a successful deployment, including maintaining sub-second latency, active/active considerations, scalability options, conflict detection/resolution, recovery, exception processing, and verifying that the source and target are synchronized properly.
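To make one of those nuances a little more concrete, here’s a minimal sketch – my own illustration, not any particular product’s logic – of conflict resolution in an active/active pair, using a simple last-writer-wins rule on change timestamps. Real replication engines offer far richer policies (site priority, column-level merging, manual exception queues), but the shape of the problem is the same.

```python
# A minimal sketch of timestamp-based ("last writer wins") conflict resolution
# for an active/active replication pair. This illustrates the idea only -
# real replication engines offer far richer policies.

from dataclasses import dataclass

@dataclass
class Change:
    key: str          # primary key of the changed row/record
    value: dict       # new column values
    timestamp: float  # commit time at the originating site
    site: str         # site that made the change

def resolve_conflict(local: Change, incoming: Change) -> Change:
    """Pick a winner when both sites have changed the same key."""
    if incoming.timestamp != local.timestamp:
        return max(local, incoming, key=lambda c: c.timestamp)
    # Equal timestamps: fall back to a deterministic tie-break on site name
    return min(local, incoming, key=lambda c: c.site)

# Both sites updated customer 0001 at almost the same moment
local = Change("0001", {"CREDIT_LIMIT": 5000}, 1322400000.120, "LONDON")
incoming = Change("0001", {"CREDIT_LIMIT": 7500}, 1322400000.250, "NEWYORK")
print(resolve_conflict(local, incoming).value)  # {'CREDIT_LIMIT': 7500}
```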

One of the problems that organizations face is the need to address lots of different business issues using what often turns out to be multiple software packages. Integrating these different pieces of software – perhaps even from different vendors – can add an extra level of complexity to the job in hand. What those organizations really need is a single piece of software that’s flexible enough to provide a comprehensive solution for changed data capture, replication, enhancing existing ETL (Extract, Transform, and Load) processes, and data migrations/conversions. Quite a big ask.

Wouldn’t you be interested in software that offers industrial-strength, near-real-time data integration solutions that include high-performance Changed Data Capture (CDC), data replication, data synchronization, enhanced ETL and business event publishing? And what if it was equally simple to experience the high-speed delivery of mainframe data (IMS, DB2, VSAM, etc) into data warehouses and downstream applications? Too good to be true?

If you’re like me, you carry around a list of capabilities in your head, and tick them off – or more often don’t tick them off – when you give software the once-over. So here are the kinds of things I’d have on my list for an integration engine. In general I’d expect:
  • Concurrent operation across multiple operating system platforms
  • Multi-step processes within a single script (UNION)
  • Simultaneous multi-record type file handling
  • Multi-level array handling (repeating groups) of source data store records/rows
  • Data filtering and cleansing
  • Dynamic look-up table processing
  • Support for data transfer and communication using TCP/IP and MQSeries
  • Preservation of referential integrity (RI) rules on target updates
  • Joins/Merges of heterogeneous databases/files.
In terms of data transformation I’d like to see (there’s a rough sketch of some of these after the lists below):
  • Case (If/Else) logic
  • Extensive date cleansing and formatting
  • Arithmetic functions (add, subtract, multiply, etc)
  • Aggregation functions (sum, min, max, avg, etc)
  • Data type conversions
  • String functions
  • Data filtering
  • XML data formatting
  • Delimited data formatting.
When it comes to datastore processing I’d want:
  • High performance bulk data transfer
  • Concurrent processing of multiple data store types
  • Creation of target data stores from source data store format
  • Insert/append to existing target data stores
  • Update/replace existing target data stores
  • Delete from existing target data stores
  • New column/field creation.
And for Data Movement, my list includes MQSeries, TCP/IP, and FTP.
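To give a flavour of the transformation items on that middle list, here’s a deliberately simple sketch – the field names and rules are entirely invented – showing case logic, date reformatting, type conversion, arithmetic, string handling, filtering, and aggregation applied to a couple of source records.

```python
# A made-up example of the transformation wish-list in action: case logic,
# date reformatting, type conversion, arithmetic, string handling, filtering,
# and aggregation applied to a couple of source records. Field names and
# rules are invented purely for illustration.
from datetime import datetime

source_rows = [
    {"CUST_ID": " 0001", "ORDER_DT": "20111127", "AMOUNT": "0001250", "REGION": "N"},
    {"CUST_ID": " 0002", "ORDER_DT": "20111126", "AMOUNT": "0000300", "REGION": "S"},
]

def transform(row):
    return {
        "CUST_ID": row["CUST_ID"].strip(),                          # string function
        "ORDER_DATE": datetime.strptime(row["ORDER_DT"], "%Y%m%d")  # date cleansing...
                              .strftime("%d/%m/%Y"),                # ...and formatting
        "AMOUNT": int(row["AMOUNT"]) / 100,                         # type conversion + arithmetic
        "REGION": "North" if row["REGION"] == "N" else "South",     # case (if/else) logic
    }

# Data filtering on the way through, then a simple aggregation on the target
target_rows = [transform(r) for r in source_rows if int(r["AMOUNT"]) > 0]
total = sum(r["AMOUNT"] for r in target_rows)
print(target_rows)
print("Total:", total)  # Total: 15.5
```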

If there was also some kind of Integration Center that had an easy-to-use Graphical User Interface (GUI) enabling users to quickly develop data integration interfaces from a single control point – that would be good. Additionally, some way to develop, deploy, and maintain data interfaces, create relational DDL (Data Definition Language), XML (Extensible Markup Language), and C/C++ structures from COBOL Copybooks, monitor the status of integration engines, and work from an integrated metadata repository – that would be a real plus.
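To show roughly what the copybook-to-DDL part of that involves, here’s a toy sketch of my own – real tools handle OCCURS, REDEFINES, COMP-3 fields, group levels and much more, so treat this as the general idea rather than anything production-ready.

```python
# A toy sketch of turning COBOL copybook fields into relational DDL. Real
# tools handle OCCURS, REDEFINES, COMP-3, group levels, and much more; this
# just shows the basic mapping idea on an invented copybook fragment.
import re

copybook = """
       05  CUST-NAME        PIC X(30).
       05  CUST-BALANCE     PIC 9(7)V99.
       05  CUST-POSTCODE    PIC X(8).
"""

def pic_to_sql(pic):
    m = re.match(r"X\((\d+)\)", pic)
    if m:                                    # alphanumeric -> CHAR(n)
        return f"CHAR({m.group(1)})"
    m = re.match(r"9\((\d+)\)(V9+)?", pic)
    if m:                                    # numeric -> DECIMAL(precision, scale)
        scale = len(m.group(2) or "V") - 1
        return f"DECIMAL({int(m.group(1)) + scale}, {scale})"
    return "VARCHAR(255)"                    # fallback for anything unhandled

columns = []
for line in copybook.splitlines():
    m = re.match(r"\s*05\s+([\w-]+)\s+PIC\s+(\S+)\.", line)
    if m:
        name, pic = m.group(1).replace("-", "_"), m.group(2)
        columns.append(f"  {name} {pic_to_sql(pic)}")

print("CREATE TABLE CUSTOMER (\n" + ",\n".join(columns) + "\n);")
```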

I’d definitely want to find out more about a single piece of software that provided high-performance Changed Data Capture (CDC) and Apply, data replication, event publishing, Extract, Transformation, and Load (ETL), and data conversions/migrations.

So, if you’re like me and want to know more, there’s a webinar from SQData’s Scott Quillicy on 1 December at 2pm GMT (8am CST). To join the webinar from your PC, you need to register before the event at https://www1.gotomeeting.com/register/844029904. I’ll see you there.

Sunday, 13 November 2011

Guest blog – Mainframe security: who needs it?

This week, for a change, I’m publishing a blog entry from Peter Goldberg, a senior solution architect at Liaison Technologies, a global provider of cloud-based integration and data management services and solutions based in Atlanta. He works directly with customers to identify their unique data security and integration challenges and helps to design solutions to suit their organizations’ requirements. A frequent speaker at industry conferences on eBusiness security issues and solutions, he can be reached at pgoldberg@liaison.com.

I’ve been helping companies on both sides of the pond solve their data security problems for many years now. If I’ve learned one thing, it’s this: when I go into an organization that runs Windows, there’s little question of the need for data security. The organization knows it and so do I. When I visit a company whose IT infrastructure revolves around a mainframe, however, the mindset is often quite the opposite. In fact, the biggest data security misconception I encounter is the belief that the mainframe environment is inherently secure. Few IT staff view the mainframe as just another network node. Why? Because it’s universally perceived as a closed environment and, therefore, invulnerable to hackers.

In some cases, it’s the mainframe IT pros who hold this conviction. In other instances, it’s the executive management team. Lack of management attention allows “bad practices” to continue. I can tell you this without reserve: data stored in mainframes needs protection just as much as sensitive information stored on a Windows server or anywhere else. And, as systems continue to support more data, users, applications, and services, effective security management in the mainframe environment becomes significantly more difficult.

News flash: mainframes can be hacked!

For that simple reason, mainframe security should not be taken for granted.

Even though the mainframe is a mature platform, there is a real shortage of mainframe-specific security skills in the market. And, the few mainframe security practitioners who are out there spend a lot of time implementing configuration and controls within their environments as well as putting into place security systems like RACF, which provide access control and auditing functionality. As for other security measures, in my experience, the mainframe people know about encryption, but they’re not terribly aware of newer data security techniques like tokenization as it relates to protecting data within the mainframe environment and beyond.

Tokenization is a data security model that substitutes surrogate values for sensitive information in business systems. A rapidly rising method for reducing corporate risk and supporting compliance with data security standards and data privacy laws, it can be used to protect cardholder information as well as Personally Identifiable Information (PII) and Protected Health Information (PHI).
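To show what that substitution looks like in principle, here’s a toy sketch – not Liaison’s implementation, or anybody else’s – of the basic idea. A production tokenization service adds a hardened vault, format-preserving tokens, access control, and auditing, none of which appears here.

```python
# A toy sketch of the tokenization idea: sensitive values are swapped for
# surrogate tokens, and the real values live only in a protected vault.
# A production service adds a hardened vault, access controls, auditing,
# and format-preserving tokens - none of which is shown here.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}           # token -> real value (the only copy kept)

    def tokenize(self, sensitive_value: str) -> str:
        token = "TOK-" + secrets.token_hex(8)
        self._vault[token] = sensitive_value
        return token               # safe to store in business systems

    def detokenize(self, token: str) -> str:
        return self._vault[token]  # only authorized callers should get here

vault = TokenVault()
card_token = vault.tokenize("4111111111111111")
print(card_token)                  # e.g. TOK-9f3a1c... - useless to an attacker
print(vault.detokenize(card_token))
```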

In fact, for companies that need to comply with the Payment Card Industry’s Data Security Standard (PCI DSS), tokenization has been lauded for its ability to reduce the cost of compliance by taking entire systems out of scope for PCI assessments. And, even in companies that do not deal with PCI DSS or other mandates, tokenization has proven effective for managing the duplication of data across LPARs and for facilitating the usage of potentially sensitive data for development purposes.

Too often, compliance audits skim over mainframe control weaknesses and there are also fewer mainframe-specific security guidelines. But this does not mean that significant risk is not there. You can apply a risk-based, defence-in-depth approach within the mainframe environment by using stronger mainframe host security controls and by using tokenization to protect the data itself.

To beef up data security on a mainframe, here’s my advice:
  1. Bring in mainframe security experts to identify and remediate risks, and to develop and enforce security policies and procedures.
  2. Develop in-house capabilities and skilled professionals across the mainframe platform to support security initiatives.
  3. Evaluate available security configuration and administration tools – there are some really good ones out there.
  4. Apply an in-depth security strategy that includes secure access and authentication controls, and use them appropriately.
  5. Adopt encryption and tokenization to protect sensitive information. Properly implemented, it’s really not that hard to achieve a truly high level of protection within the mainframe environment.

Protecting sensitive and/or business-critical data is essential to a company’s reputation, profitability, and business objectives. In today’s global market, where business and personal information know no boundaries, traditional point solutions that protect certain devices or applications against specific risks are insufficient to provide cross-enterprise data security. Combining encryption and tokenization, along with centralized key management, as part of a corporate data protection programme works well – including in mainframe-centric environments – for protecting information while reducing corporate risk and the cost of compliance with data security mandates and data privacy laws.

Don’t be fooled: your mainframe isn’t inherently secure. Doing nothing is no longer an option!

Thanks Peter for your guest blog.
And remember, there's still time to complete the mainframe user survey or place a vendor entry in the Arcati Mainframe Yearbook 2012.

Saturday, 5 November 2011

Guide Share Europe – an impression

I could only make Day 1 of this year’s Guide Share Europe conference on the 1st and 2nd of November – which was a huge disappointment. For those of you who weren’t there, I thought I’d give you a flavour of my experience.

Firstly, it was at Whittlebury Hall again – a magnificent venue just over the border from Buckinghamshire into Northamptonshire. The location is stunning and the facilities are excellent. It is in the countryside, so if you’re travelling by train, there’s a long taxi ride to get there. If you travel by car, there’s a huge car park.

The exhibition hall is big, but not so big you get lost in it. With lunch and coffee served in the hall, there were plenty of opportunities to engage with vendors and chat to other attendees. I always find it’s a great opportunity to catch up with old colleagues and make new friends. The quality of the coffee and food was good – which translates as excellent when compared to some venues!

But the point of GSE is not the food, it’s the presentations. I chair the Virtual IMS user group and the Virtual CICS user group, so I was torn between the CICS and IMS streams. In the end, I split my time between them. I watched Circle’s Ezriel Gross present on Using CICS to Deploy Microsoft .Net Winforms with Smart Client Technology – which was really fascinating. I’m sure we’re going to see more sites integrating their Windows technology with the power of mainframe subsystems. Ezriel made quite a complicated integration seem straightforward and obvious.

Next I watched IBM’s Alison Coughtrie talk about IMS 12 Overview. Another knowledgeable speaker with a lot of information to get over in the time. I certainly think I have a clearer idea of what’s new, and perhaps a small insight into where IBM is taking the product.

After lunch it was Neil Price, who works for TNT Express and chairs the IMS group for GSE, with a presentation entitled Memoirs of a HALDBA. I was so impressed with Neil’s real-life descriptions that I’ve asked him to speak to the Virtual IMS user group. Neil could have gone on for much longer than the time allowed. And I could happily have gone on listening.

Next up in the IMS stream was IBM’s Dougie Lawson. Dougie is another fantastically knowledgeable IBMer, who you may have come across when you’ve had an IMS problem. He talked about The Why and How of CSL. A real bits and bytes expert, who could have talked much longer.

I felt it was time to sit in on the CICS stream, and the session I chose was IBM’s Ian Burnett talking about CICS Scalability. Yet again, a fact-filled presentation that would be hard to criticize. I could feel my knowledge of CICS (and I used to edit CICS Update) making more sense and falling into place.

But all work and no play makes Jack a dull boy – as they say. And the evening presentation was How To Cope With Pressure & Panics Without Going Into Headless Chicken Mode from Resli Costabell. A mixture of psychology, NLP, and audience participation made this a memorable session. If you get a chance to see her anywhere – don’t miss it!

After that there were drinks in the exhibition hall sponsored by Attachmate/Suse and Computacenter, followed by dinner sponsored by EMC and Computacenter. Both were very enjoyable in their own way, and they were an opportunity to chat more informally with vendors and real mainframe users. Obviously, I was telling vendors about sponsorship opportunities with the Arcati Mainframe Yearbook, and asking users to complete the user survey.

In conversation, I asked a few of the vendors how business was going. No-one admitted that the double-dip recession was putting them out of business, but most suggested that they were keeping their heads above water and that business generally was flat – though there was some business being done.

An IBMer suggested that over 30 z196s had been sold in the UK and eight of the new z114s. So, that’s good news for them.

My overall impression of the conference was that it was excellent. I bumped into Mark Wilson (the GSE technical coordinator) during the day as he rushed around making sure everything was going smoothly. And that’s why the conference works so well, because people like Mark work so hard to ensure it does.

Well done everyone who organized it and spoke at it. And if you missed it, go next year.