It seems that as you get closer to Christmas and the New Year, everything starts to run down like an old clockwork toy. Meetings scheduled for 2013 seem ages away, even though some are at the beginning of January. And phone calls so often find the person you’re ringing out of the office – whether that’s doing a bit of family shopping or attending the numerous party and lunch invitations. And as for e-mail, people seem to leave their out-of-office message on permanently!
To be honest, I’ve been closeted away trying to get the bulk of the Arcati Mainframe Yearbook 2013 sorted out. I’ve been calculating the numbers for the user survey – and very interesting they are too. But I’ll talk more about them next year. I’ve been busy placing articles and organizing entries for vendors and consultants in the appropriate sections. So, no partying for me this week. BTW, if you want to be included in the Yearbook, e-mail me immediately here.
I’ve also been working away at a speaker programme for the Virtual IMS user group and the Virtual CICS user group. The first meeting is on 15 January, when IBM speakers will talk about the CICS V5.1 Portfolio Update. But, as I say, it’s in January and seems an age away! If you or your company are interested in presenting to either group during the year, then let me know here.
It seems that no matter whether you’re Christian, atheist, agnostic, or a person of any other faith, everyone likes a present from Father Christmas (Santa). So what do you buy the IT guy who seems to have pretty much everything? Somehow socks just don’t seem to show any real thought. Alcohol is usually well received, as are things from the more expensive end of the catalogue. But I’ve got a couple of other suggestions.
I’ve been playing with a magic magnetic balls cube. It’s 216 small magnetic balls that you can shape into all sorts of 3D things. And then you can scrunch them all up and start again. But the really hard part is getting them back into the cube shape. Remember, magnets repel as well as attract. So you can play with them for ages.
I’ve also been working with a variety of organizations on social media – getting Facebook fan pages set up, starting to use Twitter, and putting videos on YouTube. And I’ve found that the Facebook-style Like and Dislike stamps can be quite fun. Whatever happened to the paperless office? As documents pass across my desk, I can (mainly for my own amusement) stamp them with a Like or Dislike (thumbs up or thumbs down). You’ll probably enjoy doing much the same!
There are lots more ideas at www.paramountzone.com, such as touch-screen gloves, iHat music hat, USB mug warmer, flying monkeys, a one million pound bank note, alcohol breath tester keyring, tap shower radio, and light changing glow ball.
And, for the mainframer in your life, you could try to get hold of a “Proud to be part of z” poster. You can see a copy here.
So, whatever you’re doing over the next couple of weeks, have a great time. See you next year.
Sunday, 16 December 2012
Sunday, 9 December 2012
Putting it all together
I was chatting to Tony Amies, product architect at William Data Systems, about the best way to get information from a mainframe (and about what’s happening on that mainframe) out to someone who has access to a browser. He had a number of nifty techniques that I’d like to share with you.
Assuming you want to get the information from your mainframe to a browser, your first really big decision is whether to go with a two-tier architecture or a three-tier architecture. Tony’s advice was that two tiers are enough.
That leads to the second really important decision to be made, which is where should the processing take place? You’ve got to look at what needs to be done and then make a choice about which platform does that work best on. So, for example, and perhaps quite obviously, if you want to monitor what’s going on with IP, FTP, or EE (Enterprise Extender) on your mainframe, you want that processing to take place on the mainframe. That all seems fairly easy!
But what do you want to do with your results from those different monitors (and indeed any other monitoring software you have running)? Do you want to look like mission control and have a 3270 screen for each monitor that’s running? Now, I remember how impressed visitors used to be when they saw something like that, but if my plan is to get the information to a laptop or tablet device running a browser, then I probably need to consolidate the different feeds first. And, again, I want to do that on the mainframe, in the same LPAR (Logical PARtition) that the monitors are running in. It makes sense because much less work has to be done than if I choose to send that data somewhere else for processing.
If you’re familiar with WDS products, you know about ZEN, their monitor for their other products. They’re written in Assembler and C, and they use a little DLL inside each product that talks to a DLL in ZEN, so you can see a nice display of what’s going on – on everything. As a side note, using the DLLs allows the products to talk to each other, so that if the IP monitor spots something, the IP trace tool can be used to investigate further.
So, using the two-tier architecture, ZEN is logically split into two parts. One part is mainframe friendly and one part is Web friendly. Associated with the Web-friendly part of ZEN are the necessary JavaScript and jQuery libraries, and the usual HTML bits. If you’ve not come across jQuery before, it’s a library of clever JavaScript that’s already been written and that you can use to make your Web site look very modern. Lots of sites use the jQuery lightBox plugin to show photos.
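For anyone who hasn’t used it, here’s a minimal sketch of the sort of thing I mean. The gallery id and the photo links are hypothetical, and I’m assuming the lightBox plugin’s usual initializer:

// Hypothetical markup assumed: <div id="gallery"> containing <a href="..."> links to photos
$(document).ready(function () {
    // Once the page has loaded, attach the lightBox behaviour to every photo link
    $('#gallery a').lightBox();
});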
There’s always a problem with busy Web pages in that lots of information has to come from the server to the browser, resulting in lots of network traffic – and if you’re monitoring your mainframe’s network from a browser there can be even more. So the first technique they’ve used is AJAX (Asynchronous JavaScript and XML), which means that only the parts of the Web page that have changed are sent; the rest of the page stays the same. They also use JSON (JavaScript Object Notation) as a lightweight data-interchange format.
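To make that concrete, here’s a minimal sketch of the general technique – polling for a small JSON object and touching only the page elements whose values have changed. The /zen/monitor URL and the field names are invented for illustration; they’re not WDS’s actual interface:

// Poll the server for a small JSON object and refresh only the cells that changed
function refreshStats() {
    $.ajax({
        url: '/zen/monitor',   // hypothetical endpoint returning, say, {"ftpSessions": 12, "eeConnections": 4}
        dataType: 'json',
        success: function (data) {
            $.each(data, function (field, value) {
                var cell = $('#' + field);      // each field maps to a page element with that id
                if (cell.text() !== String(value)) {
                    cell.text(value);           // only changed values cause any page update
                }
            });
        }
    });
}
setInterval(refreshStats, 5000);   // re-poll every five seconds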
But there’s always an issue with any user interface, and that’s how to separate the actual user interface itself from the business logic behind it. In technical terms, these are known as the view model and the data model. To make this work as effectively as possible, they’re using MVVM (Model View ViewModel), a framework or architectural pattern that came originally from Microsoft but has now been extended to include JavaScript MVVM. You can get a clearer idea of the sorts of things they’ve done – and perhaps the sorts of things you could do – from knockoutjs.com. The site describes itself as “simplifying dynamic JavaScript UIs by applying the Model-View-ViewModel (MVVM) pattern”. The WDS developers also familiarized themselves with prototypejs.org, a site describing itself as “a foundation for ambitious web user interfaces”. They go on to say: “Prototype takes the complexity out of client-side web programming”.
For WDS, basically an object is requested from the browser by the user. This goes up to ZEN. Using JSON, a data object comes down and binds with the layout defined in the MVVM skeleton. As a user, you see dynamically updated information in your browser. If one new value affects other values, those other values are automatically updated appropriately.
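To give a feel for how that automatic updating works, here’s a minimal Knockout-style sketch. The observable names and figures are invented for illustration, not taken from ZEN: the view model holds observables, the JSON object that comes down is pushed into them, and any computed value that depends on them refreshes wherever it’s bound in the page.

// View model: observables hold the data, and computed values recalculate automatically
function MonitorViewModel() {
    var self = this;
    self.bytesIn  = ko.observable(0);
    self.bytesOut = ko.observable(0);
    // Recalculated whenever bytesIn or bytesOut changes
    self.totalTraffic = ko.computed(function () {
        return self.bytesIn() + self.bytesOut();
    });
    // Push an incoming JSON data object into the observables
    self.update = function (data) {
        self.bytesIn(data.bytesIn);
        self.bytesOut(data.bytesOut);
    };
}

var vm = new MonitorViewModel();
ko.applyBindings(vm);   // binds to markup such as <span data-bind="text: totalTraffic"></span>

// When a JSON object arrives from the server, one call updates the view model
// and every bound element in the page refreshes itself
vm.update({ bytesIn: 1024, bytesOut: 2048 });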
I’m not endorsing WDS’s products; I’m saying that they have very effectively made use of the very latest technology at the browser end to make their product work very efficiently. I’m suggesting that a two-tier architecture with really clever stuff on the browser side is definitely worth taking a look at.
Labels: 3270, AJAX, blog, browser, Eddolls, JavaScript, JQuery, JSON, knockoutjs.com, mainframe, Model View ViewModel, MVVM, prototypejs.org, Tony Amies, William Data Systems
Sunday, 2 December 2012
Computer futures
It’s December. It’s the time of the year when people are either reviewing what kind of a year it has been for them or predicting the future. I thought that this week I’d look forward to next year.
The big news for mainframe users is, of course, the price hike promised by IBM. If you go to http://www-01.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_ca/9/897/ENUS312-129/index.html&lang=en&request_locale=en you can see that, from 1 July, Flat Workload License Charges (FWLC) will increase.
According to Timothy Prickett Morgan, writing for The Register: “IBM has a wide variety of monthly software pricing schemes for its System z mainframes, but the FWLC scheme is interesting in that it applies to all machines regardless of size or vintage equally and as you increase the capacity of the mainframe, the software fees stay flat.
“The bad thing about the FWLC scheme is that it does not have what IBM calls sub-capacity pricing, where customers use virtualization to isolate capacity on a particular mainframe and then only get charged for that software based on the MSUs consumed in that logical partition.” MSUs are Millions of Service Units – the metric IBM uses to meter mainframe capacity. It looks like the average price rise will be around the 10 percent level.
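To show why sub-capacity pricing matters, here’s a back-of-the-envelope sketch of sub-capacity versus full-capacity charging in general. All of the figures are invented purely to illustrate the arithmetic – they’re not IBM prices, and this isn’t the FWLC formula itself:

// Invented figures, for illustration only
var fullCapacityMSUs = 1000;   // total rated capacity of the machine
var lparPeakMSUs     = 300;    // rolling four-hour average peak in the LPAR running the product
var pricePerMSU      = 50;     // hypothetical monthly charge per MSU

// Charging on full capacity bills for the whole machine...
var fullCapacityCharge = fullCapacityMSUs * pricePerMSU;   // 50,000
// ...whereas sub-capacity pricing bills only for what the LPAR actually consumed
var subCapacityCharge  = lparPeakMSUs * pricePerMSU;       // 15,000

console.log('Full capacity: ' + fullCapacityCharge + '  Sub-capacity: ' + subCapacityCharge);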
We probably won’t be making use of Application Performance Management Software as a Service (perhaps more easily written as APM SaaS). A survey conducted by IDG Research Services online among members of the CIO Forum on LinkedIn during August found that 61 percent of organizations have no plans to implement APM SaaS. But around a quarter (24 percent) already use APM SaaS in some capacity, and a mere 4 percent are using an APM SaaS vendor to monitor all their critical applications.
A CA Technologies survey found that 80 percent of Australian organizations are expected to face a shortage of mainframe skills in the future, with 57 percent already experiencing difficulties. The skills shortage issue is one that IBM, CA, and other companies are addressing with graduate and undergraduate programmes of study on mainframes.
The good news from the survey is that the mainframe is also playing an increasingly strategic role in managing the evolving needs of the enterprise. Again this comes as a surprise to no-one who knows about mainframes. With the growth in use of Linux on the mainframe, organizations can save lots of their budget. And the new hybrid models allow sites to get the best of all worlds.
The survey also found that 36 percent of respondents anticipate an increase in hardware spending in the next 12 to 18 months – good news for hardware vendors. And 44 percent of respondents are planning to increase their spending on mainframe-related services.
And while our focus is on mainframes, we all use laptops, tablets, and smartphones, so it’s interesting to see that Steven Sinofsky has left Microsoft. Who’s he, you say? Well, he was one of the driving forces behind Windows 8. Similarly, Scott Forstall has left Apple. Both were working to get laptops, tablets, and smartphones to use much the same interface and appear to the user to all have the same look-and-feel. Perhaps that touchscreen-style way of working will make its way to the interface to mainframe applications? Or perhaps, in many ways, it already has, insofar as remote access to mainframes can be achieved over the Internet from a browser on any platform.
I definitely predict more things will decide they are ‘cloud’ things. Mainframe users have been saying all along that they used to sit at a terminal and not worry where the application software lived, or where the data was stored – they just knew it was ‘out there’ and got on with their work. I’m sure this ‘cavalier’ attitude is one most users would like to be able to embrace. We’ve all had the problem with a file being on our office computer when we need it in the evening or weekend, or being on our home computer on Monday morning when we’re in the office. IT departments can sweat the security issues, but users will love the idea of it all (data and apps) being out there and available from anywhere.
Finally, if you’re a vendor, don’t forget to update your information in the Arcati Mainframe Yearbook 2013 – you can do it here. And if you’re a mainframe user, then help us by completing the user survey here.
Sunday, 25 November 2012
End of an era - goodbye Lotus
It was announced this week that IBM is dropping the Lotus brand from its much-loved Notes and Domino workgroup products. That’s the end of an era for a brand that first saw the light of day 30 years ago (in 1982). From Version 9.0 onwards, we’ll be calling it IBM Notes.
Let’s turn back time to 1982. We’ll gloss over the clothes, the music, and the politics, and remind ourselves about that king of spreadsheets – Lotus 1-2-3. Mitch Kapor founded Lotus Development Corporation in Cambridge, Massachusetts, USA in that year. Lotus 1-2-3 was so popular that many people bought PCs simply to use it!
Over the years, a number of other products came from Lotus – with names such as Approach, cc:Mail, Hal, Improv, Jazz, Manuscript, Magellan, Organizer, and Symphony. You can just feel the memories as you say the names – although the truth is that they weren’t all wildly successful. But with the triumph of Microsoft Office in the desktop environment, Lotus could still hold its head high. It had the best groupware product around – Notes.
In the 1990s, every presentation seemed to be how client/server was the only computing model worth considering, and Notes’ combination of messaging and database fitted the bill perfectly. And as soon as PCs could produce enough power for it to work, Notes took off.
And then in 1995, in the spirit of the old Remington catchphrase, IBM liked the product so much, they bought the company! And they paid a whopping $3.5 billion for it. IBM added Lotus Domino (the server-side version of Notes), and the product became popular for collaborative working.
Lotus ran some major jamborees, labelled Lotusphere, where the faithful could get the latest news on products and developments, and talk to like-minded users. At Xephon, where I was working in the 1990s, we published Notes Update and then Notes/Domino Update, with me as the editor.
IBM also acquired Ray Ozzie, whose company, Iris Associates, developed Notes for Lotus. But soon Ozzie was on the move and formed Groove Networks, which was taken over by Microsoft. In 2006 he became Chief Software Architect at Microsoft, but left at the end of 2010. His new company is called Cocomo.
So, the good news is that Notes and Domino go on. The sad news is that the Lotus brand name finally disappears.
Meanwhile, don’t forget to complete the mainframe users survey at www.arcati.com/usersurvey13. And vendors - make sure your free entry in the Arcati Mainframe Yearbook is up-to-date by going to www.arcati.com/vendorentry.
Labels: 1-2-3, blog, Cocomo, Domino, Eddolls, Groove Networks, IBM, Iris Associates, Lotus, Microsoft, Mitch Kapor, Notes, Ray Ozzie, Xephon
Sunday, 18 November 2012
Guide Share Europe - my impression
Yet again, I could only get to the first day of this year’s Guide Share Europe conference on 13 and 14 November – which was a shame. I had hoped I could arrange my meetings to take place at the conference, but there we are. So, I thought that, for those people who were unable to make even the first day, I’d give you a flavour of what you missed.
As usual, the conference was at Whittlebury Hall – which is near Silverstone, but just into Northamptonshire. The location is stunning, so a walk outside to get a breath of air is always worthwhile.
The exhibition hall is big and busy, and it’s where lunch and coffees are served, making it easy to chat to the vendors and other attendees. I always find it’s a great opportunity to put faces to people I usually talk to by e-mail, to catch up with old colleagues, and to make new friends. The quality of the food is always good, particularly the conference dinner.
With 14 streams and pretty much 10 sessions in each stream, it can be hard to decide which sessions to attend. I chair the Virtual IMS user group and the Virtual CICS user group, so I split my time between the CICS and IMS streams.
In the first session I went to, I saw IBM’s Steve Foley and the new CICS Director, Danny Mace, talk about CICS TS Version 5.1. They highlighted improvements in CICS capacity and scalability, suggesting users could run two or three times more workload per region. They spoke about new autonomic policy-based management, and increased availability, including being able to refresh SSL certificates without a CICS restart. They also spent some time clarifying the concepts of what an ‘application’ is and what a ‘platform’ is. You may think you already know, but defined this way, it becomes easier to move an application to a different platform. Steve also ran over some of the portfolio updates. And the session ended with an interesting conversation about pricing.
Next I headed for a presentation about IMS 13 by Paul Fletcher (also from IBM). He said that the new version came with improvements in performance, reductions in storage usage, and reduced TCO (Total Cost of Ownership). The HALDB ALTER and the DEDB ALTER commands allow changes to be made without needing a database outage. You can have multiple views of a physical database in the IMS catalog. And he talked about IMS Connect enhancements.
After lunch it was back to the CICS stream to see IBM’s Inderpal Singh talk about CICS and cloud computing. He said that an ‘application’ contained a collection of CICS bundles. It seemed that no-one in the room was using CICS bundles at this time. He also spoke about the life-cycle of an application – install, enable, disable, and discard. It’s that final stage that doesn’t seem to happen at most installations, where things are left in-place, just in case they’re needed in the future. You can create a CICS bundle in CICS Explorer using the new-look Cloud Explorer interface for CICS.
Next up in the IMS stream was IBM’s Dougie Lawson. Dougie is another fantastically knowledgeable IBMer, who you may have come across when you’ve had an IMS problem. He talked about DRD and the IMS repository – and tried to break the record for the most slides you can show in an hour! He explained that the repository is a data store of resource information, and that the repository catalog points to the repository and is quite different from the IMS Catalog.
The final technical session of the afternoon was an old friend of the Virtual IMS user group, GT Software’s Dusty Rivers, talking about IMS modernization. The main thrust of his presentation was about making data and applications available on other platforms. He suggested that mainframe modernization means different things to different people and he grouped these into adding a Web look-and-feel, getting access to mainframe data, getting to mainframe business logic, the need to consolidate logic, the need to reduce MIPS, and the need to integrate with new technologies (such as cloud and smartphones).
After a day full of so many technical presentations, you might think people would give an extra session a miss. But it was standing room only for the always-excellent Resli Costabell, who talked about dealing with ‘difficult’ people. Her definition of difficult people is that they are just people that you’ve run out of skills to deal with! And she said that the power in such a situation is with the person being difficult – so we should accept them as they are and stay calm. She even gave us a strategy. You talk as if you’re talking to a friend and say: “I know you’re only [insert positive intention], and [insert negative effect], so [and here you ask for what you want them to do]”. She also suggested that whatever someone does, they do with the best intentions, and they are doing the best that they know how. Resli explained to the group that some people look for similarities between what you’re telling them and their experience, and some look for differences. In the latter case, again she had a strategy. Tell them that they won’t like what you have to say and that there are only a few situations where they can use it. Your difficult person will immediately find lots of ways that they can use it (which is really what you wanted all along!). A really great session – and I haven’t even mentioned Mark Wilson’s contribution!
Over drinks in the exhibition hall sponsored by Vanguard, Attachmate, and Suse, and an excellent dinner sponsored by Computacenter and PKWARE, I chatted more informally with vendors and real mainframe users. As usual, I was encouraging vendors to complete their entry in the Arcati Mainframe Yearbook, and explaining the benefits of sponsoring it. And I was asking mainframe users to complete the user survey.
My overall impression of the conference was that it was excellent. Mark Wilson (the GSE technical coordinator) and his colleagues made sure everything worked well. I picked up loads of information, and had a really good day.
Well done everyone who organized it and spoke at it. And if you missed it, go next year.
Labels: blog, Conference, Danny Mace, Dougie Lawson, Dusty Rivers, Eddolls, Europe, GSE, GT Software, Guide, IBM, Inderpal Singh, Paul Fletcher, Resli Costabell, Share, Steve Foley, Whittlebury Hall
Saturday, 10 November 2012
How important is e-mail?
As the bulk of the mainframe population gets older, you can expect that some of them will retire. You can also expect, in any job, that people will get promotions or transfer to other organizations. And so it comes as no surprise to me when I e-mail my list of members of the CICS or IMS virtual user groups each month that perhaps one or two will be undeliverable – like Elvis, they’ve left the building. And each year when I e-mail people about the Arcati Mainframe Yearbook user survey, I’m even less surprised to find their e-mails are bouncing.
It seems a pretty fair assumption to make – if an e-mail address stops working, then the person is no longer working for that company. And in my case, I tend to delete them off my list. I guess it’s what most people do in order to keep their e-mail lists up-to-date. Just delete the bounce-back/undeliverable e-mail addresses.
And that strategy seemed to make sense up until late last Friday night – that’s over a week ago!
I never give e-mail a thought these days. I’ve been using it forever. I’ve been using the Internet since the days of Bulletin Boards. I can access my e-mail on my phone. E-mail has always been just there. It’s like electricity and running water – I know there’s an infrastructure that delivers it to my home, but, most of time, I just take it for granted. It’s there, I use it, and I don’t give it a second thought!
I have a personal e-mail address and a company account. I use Yahoo and Gmail. I can get to my e-mail on anyone’s laptop, on my tablet, and, like I said, on my phone. I can choose to check my e-mail during the day, wherever I am. I can check it last thing at night and first thing in the morning. I get Listserv e-mails from CICS and IMS sites. I get newsletters and other consolidated information. I get Google alerts about mainframe news. And, of course, I get lots of spam. Or, at least, I did, up until last Friday evening.
I tend to keep my e-mail open all day and I answer e-mails during a break from whatever else I’m working on. It acts like a refresher – unless the e-mail is causing more work, in which case I flag it and come back to deal with it later. I keep abreast of what’s going on and what people are saying. I can then answer press or client inquiries as they come in. I use e-mail to send articles to various publishers.
So, that’s not very different from hundreds of other people. I use e-mail a lot. I get around 50 e-mails every hour (plus spam). Some are just trying to sell me something, but a lot are work-related and useful.
But, late last Friday evening, my e-mail stopped working. Or, to be more precise (and sound less like a typical end-user!), my e-mail forwarding stopped. All the people writing to trevor, admin, virtualims, virtualcics, arcati, and lots of other addresses @itech-ed.com had their e-mails flagged as undeliverable. As far as they were concerned my company had disappeared. I, along with everyone else here, had probably retired, resigned, or just left!
It was all to do with my IT provider being taken over. The new company ‘updated’ my service – and took away all my e-mail forwarding. By Monday, I was concerned. By Tuesday, I was very concerned. All these bouncebacks meant people would be thinking not only that I didn’t exist, but my whole company had disappeared. By Wednesday my frantic back-and-forth with my provider was leaving me tearing my hair out. And here we are now. Over a week of frantic messaging and tweeting by me still hasn’t made a difference. Come on Dotster – I need my e-mail working. I need people to know that iTech-Ed Ltd is still in business.
I now have a much greater appreciation of e-mail and its importance to any business. And how a glitch can cause no end of problems. It’s as if the water and electricity to my office had been turned off!
If you have any suggestions about what I can do next, I’d love to hear from you.
Sunday, 4 November 2012
What’s really going on?
One of the problems with being a mainframer these days is finding out what’s going on at other sites and being able to compare your experiences with other people’s. There used to be rooms full of mainframe staff, and regular turnover meant that new ideas were easily examined. Apart from Google, nowadays you can keep in the loop by joining a user group (like the Virtual IMS and Virtual CICS user groups that don’t require you to be out of the office to attend meetings), going to conferences (you’ve just missed IOD, but Guide Share Europe takes place next week), or reading survey results.
And as we’re coming to the end of another year, surveys seem to be happening more frequently.
BMC Software recently published a survey from their Asia-Pacific region. They found there was a growth in processor engines, which was driven by transaction processing requirements. And driving that is users wanting to have data available to them at any time and anywhere – particularly users with mobile devices. BMC also found that some of the growth could be attributed to legacy application development and some to newer applications where the mainframe provides the back-end delivery of that application.
Respondents liked mainframes for security and data integrity reasons, as well as their centralized manageability. One result that won’t surprise anyone who’s investigated the option was that the cost of moving away from the mainframe is very high!
CA Technologies has published a survey of 800 IT and business leaders. They found that IT and business leaders often have two different views on innovation, with IT respondents suggesting that they are more likely to position themselves as driving innovation, being an expert on innovation, and having the required skills to foster innovation. Business executives identified IT’s shortcomings in regard to its ability to support and drive innovation and gave themselves credit for innovation.
With results that could have been published any time up to the 1990s, the survey found large gaps can be found in rating IT’s knowledge of the business, IT’s business and communications skills, and overall speed and agility. I thought this division had healed over many years ago, but maybe the pendulum is swinging back the other way? Perhaps the paucity of IT staff makes it harder for them to get to meetings and interact with other execs? Or maybe the CIO (Chief Information Officer) is disappearing from organizations and IT is being relegated into a silo all over again?
Not surprisingly, the survey found common frustrations such as organizations’ lack of agility, and budget and staff resource shortages. Interestingly, new IT initiatives include mobile and business intelligence/analytics. Organizations reporting high levels of innovation are also planning investments in cloud, security management, business analytics, service management, and virtualization.
If you want to have your say about what’s happening at your site, the Arcati Mainframe Yearbook is conducting a user survey now. You can find the survey at www.arcati.com/usersurvey13.
If you’re interested in the Virtual IMS or CICS user groups, you can find them at www.fundi.com/virtualims/ and www.fundi.com/virtualcics/ respectively. More information about the GSE conference is at www.gse.org.uk/tyc/.
Labels: Arcati, blog, BMC Software, CA Technologies, Eddolls, mainframe, mainframer, survey, user, Yearbook
Sunday, 28 October 2012
IOD news
In a week that saw Windows 8 and that funny iPad mini being announced, the really hot news was coming out of IBM’s Information On Demand (IOD) conference in Vegas.
Running under the ‘Think Big’ tag we saw IBM announce IBM PureData System, InfoSphere Guardium V9.0, InfoSphere Information Server V9.1, InfoSphere Master Data Management V10.1, IBM Cognos Insight Personal Edition, IBM Cognos 10.2, IBM Cognos TM1.1.1, IBM Cognos Disclosure Management, IBM Analytical Decision Management, IBM SPSS Statistics, IBM SPSS Modeler, Intelligent Investigation Manager, Patient Care and Insights, Datacap Taskmaster Capture, and Content Manager OnDemand for Multiplatforms.
The third member of IBM’s PureSystems family is the PureData System, a set of appliances designed to provide data services to big data and cloud applications. This new PureData System family comprises: PureData System for Transactions, which is aimed at reducing data management costs; and PureData System for Analytics, which is designed to analyse large volumes of data.
InfoSphere Guardium V9.0: introduces Hadoop Activity Monitoring to protect sensitive data in Big Data environments; enhances data security for System z with improved performance, resiliency and scalability; further reduces TCO and provides simplified scalability with Guardium grid/load balancing; introduces Security Content Automation Protocol (SCAP) reporting in Vulnerability Assessment; and extends in-depth data security with new security solutions integrations, such as Security Intelligence with QRadar, and IDS (Intrusion Detection System) insight with F5.
InfoSphere Information Server V9.1 provides a scalable, secure, and robust data integration platform. V9.1 offers new features to help rationalize increasingly complex environments and address evolving data integration requirements, so that data is accessible, authoritative, consistent, timely, and in context for analysis and reporting.
InfoSphere Master Data Management V10.1 builds on the unification of InfoSphere Master Data Management Server, Initiate Master Data Service, and InfoSphere Master Data Management Collaboration Server. New features include: Master Data Policy Management; enhanced Business Process Management (BPM) integration capabilities and sample workflows; enhanced probabilistic matching and searching; enhancements to the InfoSphere MDM Application Toolkit; advanced Rules Management; and advanced Catalog Management.
Cognos Insight Personal Edition, IBM’s personal analytics solution, enables business users and analysts to access business information and share it with others. IBM Cognos 10.2 lets users import personal or business data from spreadsheets or CSV files, select visualizations of the data, drill down for trends, and more. They can use drag-and-drop data visualization, what-if scenario modelling, and dashboard-style delivery.
IBM Cognos TM1.1.1 delivers capabilities in personal desktop analysis for data exploration, prototyping, and results sharing for personal, workgroup, and enterprise levels. Users get: easier modelling for planning and analysis solutions; improved server performance with the new IBM Cognos TM1 Operations Console; and advances in distributed architecture for increased scale and interactivity.
IBM Analytical Decision Management integrates predictive analytics, local rules, scoring, and optimization techniques into an organization’s systems, and then delivers real-time recommendations at the point of impact so organizations can consistently drive better outcomes.
IBM SPSS Modeler is a high-performance predictive and text analytics workbench that enables organizations to gain insight from their data and deliver a positive return on investment by embedding predictive analytics into business processes.
Intelligent Investigation Manager optimizes fraud investigation and analysis for insurance, banking, healthcare, and other customers, and public safety programmes for government and law enforcement organizations. It dynamically coordinates and reports on cases, provides analysis and visualization, and enables more efficient and effective investigations.
Patient Care and Insights is an integrated and configurable set of solutions that brings together advanced analytics and care management capabilities to help healthcare organizations maximize the value of information for treating patients.
Content Manager OnDemand for Multiplatforms delivers instant access to bills, statements, and other print archive information. This helps organizations quickly respond to customer inquiries and manage access to electronic reports used to communicate activities and performance across the organization. Content Manager OnDemand also helps organizations reduce print, storage, and distribution costs in support of a greener content delivery solution.
Obviously, much more detail can be found on IBM’s Web site.
Labels:
announcements,
blog,
Eddolls,
IBM,
Information On Demand,
IOD,
Think BIG
Sunday, 21 October 2012
Guide Share Europe annual conference
The Guide Share Europe (GSE) UK Annual Conference is taking place on 13-14 November at Whittlebury Hall, Whittlebury, Near Towcester, Northamptonshire NN12 8QH, UK.
Sponsors this year include IBM, Computacenter, PKWARE, CA Technologies, Attachmate Suse, Vanguard, Compuware, GT Software, RSM Partners, Blenheim Software, Sett, and Red Hat. And there will be over 30 vendors in the associated exhibition.
There’s the usual amazing range of streams – and, to be honest, there are a number of occasions when I would like to be in two or more places at once over the two days. The streams are: CICS, IMS, DB2, Enterprise Security, Large Systems Working Group, Network Management Working Group, zLinux, Storage Management, TWS (Tivoli Workload Scheduler), Automation, New Technologies, Software Asset Management, MQ, Application Development, and Assembler Training.
There are also keynotes, including one from Mark Anzani, VP and CTO for System z at IBM. And there’s the not-to-be-missed “Dealing with difficult people” session from Resli Costabell.
That means that at this year’s conference there will be well over 140 hours of education covering most aspects of mainframe technology – more than last year. This year, there will be 14 streams of ten sessions over the two days, plus Assembler Training, plus four keynotes.
There is still time to register, and the organizers are expecting the daily total of delegates to exceed 300 – as it did last year.
A number of the streams have 101 sessions at different times during the two days to give newcomers and those unfamiliar with parts of the mainframe infrastructure a basic understanding of that mainframe technology and how it works.
You can find out more details about the conference at www.gse.org.uk/tyc/.
If you’re still debating whether to go, let me recommend it to you. The quality of presentations is always excellent. And the networking opportunities are brilliant. If you are going, I look forward to seeing you there.
Labels:
Attachmate Suse,
Blenheim Software,
blog,
CA,
Computacenter,
Compuware,
Conference,
Eddolls,
Europe,
GT Software,
Guide,
IBM,
Mark Anzani,
PKWARE,
Red Hat,
Resli Costabell,
RSM,
Sett,
Share,
Vanguard
Sunday, 14 October 2012
IOD – so near yet so far away!
The IBM Information On Demand 2012 global conference is one of those conferences that you just don’t want to miss. And yet, like me, when you’re based in the UK, that’s just what’s going to happen. I won’t be rubbing shoulders with the people attending, listening to the speakers, or visiting the exhibitors’ stands. But I will be keeping up to date with what’s going on!
IOD 2012 runs from this coming Sunday, 21 October, to Thursday 25 October at the Mandalay Bay resort in Las Vegas, Nevada, USA. This year, the old IBM slogan, “Think”, has been turned into “Think BIG”. And the whole conference is billed as the largest System z software event in the world. The Web site at www-01.ibm.com/software/os/systemz/conference/iod/ goes on to say that it delivers “the know-how you need to optimize business performance, drive more value from existing business analytics deployments, and get the latest innovations and success strategies from IBM experts, analysts, and customers.”
I chair the Virtual IMS user group – you can find us at www.fundi.com/virtualims – so I’m very interested in the IMS stream at IOD, particularly the session from Betty J Patterson (a Distinguished Engineer with IBM), “Taking IMS to New Heights: What the Future Holds for IMS”, and the “IMS Futures Roadmap” keynote from Dinesh Nirmal (IMS Director, IBM). But there is so much more of interest. You can see the session details at www-01.ibm.com/software/os/systemz/conference/iod/ims.html.
It’s not just IMS that’s featured: there are DB2 sessions, business analytics/data warehousing sessions, and tools and utilities sessions – totalling, they say, 180 executive, business leadership, and technical sessions in all.
And while it might seem unfair to highlight any of the many sessions going on at IOD, I guess that’s what everyone has to do in order to set a personal agenda for the four days. I’d like to attend some of the client-led sessions and see what real users are doing with their mainframes. What problems they’ve been experiencing and learn how they’ve overcome them. And I’d like to get my hands dirty in the hands-on labs and workshops. Perhaps dig a little deeper than I’ve ever had the chance to do with a production system.
Going to IOD lets you chat to other users: to find out that perhaps you’re not alone in dealing with the pressures of fewer staff and tightening budgets, to hear what tools and techniques other people have used to overcome these issues – and perhaps to get an early warning about issues that haven’t come your way yet!
I can’t be there in person, but as I’ve been an IBM Champion for the past four years, you may see my picture on display. I will definitely be using my laptop and tablet to keep up to date with the news and information already coming from the IOD 2012 Web site. I’ll be reading press releases and the blogs (www.ibmiodblog.com/), and I’ll be watching out for people tweeting and reading with interest what they have to say. And I’m hoping that my friends who are there will keep me in the loop with regular and frequent e-mails.
And to those of you who are lucky enough to attend – enjoy!
Labels:
Betty J Patterson,
blog,
Conference,
Dinesh Nirmal,
Eddolls,
IBM,
Information On Demand,
IOD,
Think BIG
Sunday, 7 October 2012
The Arcati Mainframe Yearbook 2013
The Arcati Mainframe Yearbook has been the de facto reference work for IT professionals working with z/OS (and its forerunner) systems since 2005. It includes an annual user survey, an up-to-date directory of vendors and consultants, a media guide, a strategy section with papers on mainframe trends and directions, a glossary of terminology, and a technical specification section. Each year, the Yearbook is downloaded by around 15,000 mainframe professionals. The current issue is still available at www.arcati.com/newyearbook12.
Very shortly, many mainframe professionals will receive an e-mail telling them that Mark Lillycrop and I have started work on the 2013 edition of the Arcati Mainframe Yearbook. If you don’t hear from us, then e-mail trevor@itech-ed.com and I will add you to our mailing list.
As in previous years, we’re inviting mainframe professionals to complete the annual user survey, which will shortly be up and running at www.arcati.com/usersurvey13. The more users who complete the survey, the more accurate and therefore useful the survey report will be. All respondents before Friday 7 December will receive a free PDF copy of the survey results on publication. The identity and company information of all respondents is treated in confidence and will never be divulged to third parties. And any comments made by respondents will be anonymized before publication. If you go to user group meetings, IOD, GSE Europe, etc, or just hang out with mainframers from other sites, please pass on the word about this survey. We’re hoping that this year’s user survey will be the most comprehensive survey ever. Current estimates suggest that there are somewhere between 6,000 and 8,000 companies using mainframes spread over 10,000 sites worldwide.
Anyone reading this who works for a vendor, consultant, or service provider, can ensure their company gets a free entry in the vendor directory section by completing the form, which will be at www.arcati.com/vendorentry. This form can also be used to amend last year’s entry.
Also, as in previous years, there is an opportunity for organizations to sponsor the Yearbook or take out a half-page advertisement. Half-page adverts (5.5in x 8.5in max landscape) cost $750 (UK£460). Sponsors get a full-page advert (11in x 8.5in) in the Yearbook; inclusion of a corporate paper in the Mainframe Strategy section of the Yearbook; a logo/link on the Yearbook download page on the Arcati Web site; and a brief text ad in the Yearbook publicity e-mails sent to users. All this for just $2200 (UK£1300).
To put that cost into perspective: for every dollar you spend on a half-page advert, you reach around 20 mainframe professionals – roughly the 15,000 downloads divided by the $750 price.
The Arcati Mainframe Yearbook 2013 will be freely available for download early in January next year.
Labels:
advertise,
Arcati,
blog,
consultants,
directory,
Eddolls,
glossary,
mainframe,
Mark Lillycrop,
media guide,
specification,
sponsor,
strategy,
terminology,
trends,
user survey,
vendors,
Yearbook,
z/OS
Sunday, 30 September 2012
IMS's HALDBA again
Last time we looked at the first part of Neil Price’s presentation to the Virtual IMS user group back in February. Neil is a Senior DBA and Systems Programmer with TNT Express ICS. Here are some more highlights from his presentation.
Neil suggested that at his site key-range partitions can be so different from each other that they need to be treated like independent databases. For example, if the volume of data in one partition is growing much faster than in the others, it might need a lot more free space so that you don’t have to resize or split it too soon, and you might want to set the free space warning threshold lower, so that you have time to react. Another example is where the average database record length in one partition is much lower than in the others. If you want to be alerted when the record lengths increase, the threshold for this partition would need to be correspondingly lower.
When it comes to tuning – what Neil called “Performance Tweaks” – they have various processes where values are obtained in physical sequential order from the root keys of one database and used to access one or more related databases with the same root key. If the databases involved are all HDAM and use the DFSHDC40 randomizing routine, this works well because the randomizer tends to keep keys in more or less the same sequence regardless of the total number of Root Anchor Points, as long as this is much greater than the number of roots. This means that all the databases are accessed in physical sequential order, which more or less guarantees that each database block is read only once and so minimizes I/O, as well as generating a sequential access pattern that maximizes read hits in disk cache and more recently has enabled them to take advantage of OSAM Sequential Buffering. But once you go to PHDAM, the keys are only kept in this sequence within each partition. Optimal performance is only restored when all the databases involved are HALDBs with identical partitioning arrangements.
Neil Price identified something that he said confuses a lot of people, including some in IBM support, not least because the manuals are unclear. Normally, when a KSDS data CI fills up, it splits roughly in half. That’s not good for ascending keys such as the timestamps used by most of their HIDAM databases, because the old or “lower” CI will stay half-empty forever. In the same way, a CA split will leave the old CA with half its CIs empty and the rest usually half-empty – in other words, only about a quarter full. If the insert rates are high, the KSDS can end up with a very high proportion of unusable free space and might approach the 4GB limit. However, if Sequential Mode is used for the inserts, CI and CA splits are done at the point of the insert – which for ascending keys means that it starts a new, empty CI or CA and leaves the old one as it is. This is good for performance – CA splits have been known to take seconds to complete – as well as greatly slowing down the dataset growth.
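If it helps to picture the effect, here’s a small illustrative sketch – a thought experiment in Python rather than anything IMS actually does internally – comparing average CI utilization for strictly ascending keys under a split-in-half policy and a split-at-the-insert-point policy:

```python
# Illustrative only: compares "split in half" with "split at the insert point"
# for strictly ascending keys, which always land beyond the current highest CI.
CI_CAPACITY = 10          # records per control interval (made-up number)
NUM_KEYS = 1000           # ascending keys, e.g. timestamps

def average_utilization(split_in_half):
    cis = [[]]            # each CI is just a list of keys
    for key in range(NUM_KEYS):
        target = cis[-1]  # ascending keys always belong in the last CI
        if len(target) < CI_CAPACITY:
            target.append(key)
        elif split_in_half:
            # classic CI split: move the upper half to a new CI, then insert there
            new_ci = target[CI_CAPACITY // 2:]
            del target[CI_CAPACITY // 2:]
            new_ci.append(key)
            cis.append(new_ci)
        else:
            # "sequential mode" behaviour: leave the full CI alone, start a new one
            cis.append([key])
    return sum(len(ci) for ci in cis) / (len(cis) * CI_CAPACITY)

print(f"Split in half:         {average_utilization(True):.0%} average CI utilization")
print(f"Split at insert point: {average_utilization(False):.0%} average CI utilization")
```

Run it and the split-in-half case settles at roughly half-full CIs, while the split-at-the-insert-point case leaves the old CIs full – which is exactly Neil’s point.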
It’s possible to specify sequential-mode inserts in the IMS buffer pool specification member by coding INSERT=SEQ on the OPTIONS statement, but that would apply to every KSDS. Instead they include a DBD statement for each index they want treated this way and specify FREESPACE=YES, which is the dataset-level equivalent. Sequential-mode inserts also honour the free space specifications for the KSDS. This means that if free space is specified and all inserts are done this way, the free space will never get used and is a complete waste of space. All their inserts are done through the online system, so FREESPACE=YES implies that there should be no free space specified for the cluster!
FREESPACE=YES would work almost as well with constantly descending keys. It might also be beneficial for some indexes whose keys are generally, rather than strictly, ascending or descending, but until they’ve got IMS 12 and can change buffer pool parameters dynamically it’s going to be difficult to determine. Currently they specify FREESPACE=YES for six of their PSINDEXes, of which only one has a strictly ascending key and the rest start with a date. There used to be more, but after analysing the VSAM statistics, Neil reverted some to normal inserts and non-zero free space. This reduced the rate of CA splits, i.e. growth, and also the overall I/O. Neil reminded us that PSINDEX records can be relatively large, which makes the total index much larger than its non-HALDB version. It also means that fewer records are retrieved in one I/O, which is bad for sequential processing.
For Neil’s site, most of their indexes are accessed with a partial key, which in some cases will match hundreds of entries. Neil attempted to minimize the effects by making the VSAM buffer pools bigger, although he knew he wouldn’t be able to get the I/O rates back down to pre-HALDB levels. He reasoned that the buffers ought to hold the same number of VSAM data records as before, so he aimed to increase the number of buffers in line with the increase in the record length. When he calculated the new subpool sizes he discovered that some came out greater than the limit of 32,767 buffers, and in any case there wasn’t enough virtual storage available for such large increases, so he had to scale them all back anyway. But the I/O rates for most of the OSAM subpools are higher than for any of the VSAM ones, so this is maybe not the area with the most potential for I/O reduction.
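As a back-of-an-envelope illustration of that sizing exercise – the subpool names, record lengths, and buffer counts below are invented, not Neil’s figures – scaling each subpool by the growth in record length and then capping at the VSAM limit might look like this:

```python
# Hypothetical subpools: scale VSAM buffer counts in line with record-length
# growth, then cap at the 32,767-buffer limit mentioned above.
VSAM_BUFFER_LIMIT = 32_767

subpools = {
    # name: (current buffers, old average record length, new PSINDEX record length)
    "INDEXA": (20_000, 60, 110),
    "INDEXB": (28_000, 50, 95),
}

for name, (buffers, old_len, new_len) in subpools.items():
    target = round(buffers * new_len / old_len)   # keep the same number of records buffered
    capped = min(target, VSAM_BUFFER_LIMIT)
    note = " (capped at the VSAM limit)" if target > VSAM_BUFFER_LIMIT else ""
    print(f"{name}: {buffers:,} -> {capped:,} buffers{note}")
```

With these made-up numbers both subpools blow straight through the 32,767 limit, which mirrors what Neil found before he scaled everything back.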
To sum up, Neil informed the user group that HALDBs can change your life if you’re a DBA! Once you’ve converted your databases, that’s when the fun really begins.
You might like to know that the Virtual IMS user group can be found at www.fundi.com/virtualims. The next meeting is on 9 October at 11:30am EDT, when the guest speaker will be Aurora Emanuela Dell’Anno, a Senior Engineering Services Architect with CA Technologies, who’ll be talking about “IMS performance – taming the beast”. It will be well worth attending – and, being a virtual meeting, there’s no need to leave your desk. See the Web site for more details.
Labels:
Aurora Emanuela Dell'Anno,
blog,
DBA,
Eddolls,
HALDB,
HALDBA,
IMS,
Neil Price,
user group,
Virtual
Sunday, 23 September 2012
IMS’s HALDBA
The Virtual IMS user group at www.fundi.com/virtualims holds regular virtual meetings every other month. In fact, the next one is on 9 October at 10:30 Central Daylight Time. Our guest speaker is Aurora Emanuela Dell’Anno, a Senior Engineering Services Architect with CA Technologies, and she will be talking about “IMS performance – taming the beast”. It will be well worth attending – and, being a virtual meeting, there’s no need to leave your desk. See the Web site for more details.
Back in February, Neil Price, a Senior DBA and Systems Programmer with TNT Express ICS, gave an excellent presentation entitled “Memoirs of a HALDBA”. Neil first encountered IBM mainframes as a student in 1971, and more recently he has been Chairman of the GSE UK IMS Working Group.
Neil started his presentation by giving the user group a clear idea of what hardware and software TNT Express use. This is relevant because it affects the decisions that Neil and his organization had to make. They have two machines – one hosting the main production LPAR; and the other running the network, testing, and one or two other things. This has a total disk capacity of about 18TB, of which just over 4TB are mirrored to their Disaster Recovery site, about 100 miles away in London.
Neil informed us that they always used OSAM because they believe it’s more efficient. More recently, of course, it gave them the ability to have datasets up to 8GB in size. Very few of their databases ever get disorganized enough to need a reorg unless they’re running out of space. The use of HDAM for all of the most volatile databases is a major factor in this. A few are reorganised weekly, 3 of them in order to remove logically-deleted data with a user exit, and a few quarterly.
In terms of HALDB, Neil told us that since they started investigating and testing HALDB in 2004, they’ve converted 18 HDAM databases, which were getting close to the 8GB OSAM dataset size limit, including the whole of the CCDB (Central Consignment Data Base, consisting of 15 HALDBs of 6 partitions each) and one with an index that was threatening to hit 4GB between weekly reorgs. Just two of their HALDBs are partitioned by key range. Most of the HALDBs got up to 16 partitions on model-3 logical volumes over time, but went down to 6 partitions when they moved them to model-9s. Apart from Consignment, each HALDB has at least one index. Five have 28 indexes between them, and 13 of the indexes are also partitioned.
Neil was pleased to have avoided certain challenges at his site. For example, the DBRC question – they decided to have DBRC forced registration in every system from the very beginning, and, because there’s virtually no DL/I batch, there’s just one copy of each database and it’s permanently online. It didn’t take much thinking to decide that their 14 cloned databases would stay separate when they became HALDBs. Anything else would have involved major application code changes. The databases using a partitioning exit all use the Consignment key, so they’ve effectively had to write only one simple exit routine. They still maintain separate modules, though, just in case they need to vary the logic at any time.
To simplify everything they treat the HALDBs just the same as before except in very exceptional circumstances and haven’t changed their names. In particular, not taking individual partitions offline means that the applications don’t need to handle partitions being unavailable.
In terms of challenges they faced, Neil said that the first database they converted was one using the Consignment key, and it was fairly obvious that they wouldn’t get a predictable spread of data using key ranges. In the end they adapted the example in the IBM Redbook “The Complete IMS HALDB Guide” (SG24-6945) to select a partition based on a portion of the key, in their case the trailing characters of the 9-digit character numeric Consignment number. With equally-spaced “limits”, this has resulted in a very even spread of data across the partitions. They started with 2 digits, but once they went from 10 to 12 and then 16 partitions it seemed sensible to start using 3 digits for a slightly more even spread. Of course the overall physical sequence of records isn’t the same as in HDAM. For databases with key-range partitioning, they used the HALDB Migration Aid utility DFSMAID0 to choose the limits.
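The exit itself is a site-specific assembler routine, so purely to illustrate the shape of the logic Neil describes – select a partition from the trailing digits of the consignment number using equally spaced limits – here’s a hedged Python sketch; the partition count and function name are assumptions, not TNT’s code:

```python
# Illustrative only: map the trailing three digits of a 9-digit character-numeric
# consignment number onto NUM_PARTITIONS partitions with equally spaced limits.
from collections import Counter

NUM_PARTITIONS = 6

def select_partition(consignment_number: str) -> int:
    """Return a partition number in the range 1..NUM_PARTITIONS."""
    trailing = int(consignment_number[-3:])        # 000..999
    width = 1000 / NUM_PARTITIONS                  # equally spaced "limits"
    return min(int(trailing // width) + 1, NUM_PARTITIONS)

# Quick check that the spread across partitions really is even for made-up keys
spread = Counter(select_partition(f"{n:09d}") for n in range(100_000))
print(sorted(spread.items()))
```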
When it comes to monitoring challenges, Neil told the group that for non-HALDBs they get a list by e-mail of any thresholds that have been exceeded, for example the number of database records or the number of HDAM roots not stored in their “home” block. This is sent by a simple SAS program that processes a daily report of all metrics and threshold exceptions from their Pointer Checker statistics repository. The statistics are gathered as part of the daily online Image Copy backups. They find the e-mail preferable to having to navigate ISPF panels or a GUI to find out what exceptions have occurred, if any.
HALDB statistics, on the other hand, are written to a completely different repository, which can be viewed through a GUI but didn’t come with facilities to generate reports or e-mails. So they segregated the HALDBs into their own backup jobs, which direct the Pointer Checker report to a file. An additional SAS step then processes the file, writes the results to a SAS database and a summary report, and generates an e-mail if the percentage of usable free space in any partition falls below a hard-coded threshold.
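Their implementation is SAS, but the underlying idea – post-process the Pointer Checker output and flag any partition whose usable free space drops below a hard-coded threshold – is simple enough to sketch. This illustrative Python version assumes a CSV-style input layout and field names, which are not the real report format:

```python
# Illustrative only: the real post-processing is a SAS step, and the input layout
# here (a CSV with database, partition, and usable free space %) is an assumption.
import csv

FREE_SPACE_THRESHOLD = 15.0    # per cent; hard-coded, as in Neil's description

def partitions_needing_attention(report_path):
    with open(report_path, newline="") as report:
        for row in csv.DictReader(report):
            free_pct = float(row["usable_free_space_pct"])
            if free_pct < FREE_SPACE_THRESHOLD:
                yield row["database"], row["partition"], free_pct

# In the real setup the results go into an e-mail; here they are simply printed.
for database, partition, free_pct in partitions_needing_attention("haldb_freespace.csv"):
    print(f"WARNING: {database} partition {partition} has only {free_pct:.1f}% usable free space")
```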
Find out more about Neil Price’s experience next week.
Labels:
Aurora Emanuela Dell'Anno,
blog,
Eddolls,
HALDBA,
HDAM,
IMS,
Neil Price,
TNT Express ICS,
user group,
Virtual
Sunday, 16 September 2012
CICS communication history
In the beginning, CICS programmers used conversational transactions – but not any more. Pseudo-conversational coding is less resource-intensive, user think-time costs are eliminated, and sites can run more simultaneous transactions. On the down side, programming is harder, application design requires re-starting programs, debugging is more difficult, and programs need an area to store data between tasks.
And so we get the Communication Area (COMMAREA). It’s a terminal-based scratchpad that’s dynamic in size (0 to 32,500 bytes), and is private and chained to the terminal. The COMMAREA can hold data between pseudo-conversations, has a maximum size of 32KB, can be passed from program to program, can be empty, partially full, or full, and is automatically cleaned up when not used.
However, COMMAREAs have obvious limitations, and so IBM gave us channels and containers. There are no size limitations on data in containers, the number of containers in a channel is unlimited, and there can be separate containers for input and output.
Another way round COMMAREA limitations is to use an XML (eXtensible Mark-up Language) front-end application. Associated with this are:
- Extensible Stylesheet Language Transformation (XSLT) is the recommended stylesheet language for XML.
- XML namespaces – in XML, element names are defined by the developer, so namespaces are used to avoid naming conflicts when documents are combined.
- An XML schema is an XML document that describes the structure and constrains the contents of other XML documents.
- An XML parser is a program that is invoked by an application to process an XML document, ensuring that it meets all the rules of XML as well as the syntax of the DTD or schema, and making the data available to the calling application.
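As a tiny illustration of that last point, here’s how an application might invoke a parser and pick the data out of an XML document. Python’s standard library parser is used purely as an example, and the element names are made up:

```python
import xml.etree.ElementTree as ET

# A made-up document of the kind an XML front-end might receive
document = """<order xmlns="http://example.com/orders">
                <customer>ACME Ltd</customer>
                <item code="A123" quantity="4"/>
              </order>"""

namespaces = {"o": "http://example.com/orders"}   # XML namespace, as noted above
root = ET.fromstring(document)                    # the parser checks the document is well formed
customer = root.find("o:customer", namespaces).text
item = root.find("o:item", namespaces)
print(customer, item.get("code"), item.get("quantity"))
```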
With the introduction of Service-Oriented Architecture (SOA), programmers now use CICS Web services. A Web service is a collection of operations that are network accessible through standardized XML messaging. The Web services architecture is based on interactions between three components: a service provider, which is the platform that hosts access to the service; a service requester, which is the application that is looking for and invoking or initiating an interaction with a service; and a service registry, which is a place where service providers publish their service descriptions, and where service requesters find them.
Universal Description, Discovery and Integration (UDDI) is a specification for distributed Web-based information registries of Web services. The service provider “owns” the service, creates the WSDL (Web Service Description Language – see below), publishes the WSDL, and processes the requests. The service requester “finds” the service, binds to the service, and invokes the service using the WSDL. The service registry hosts the service description and is optional for statically bound requesters.
WSDL is an XML application for describing Web services. WSDL comprises:
- Types – the data types in the form of XML schemas
- Message – an abstract definition of the data in the form of a message
- PortType – an abstract set of operations mapped to one or more end points
- Binding – the concrete protocol and data formats for the operations
- Service – a collection of related end points.
Simple Object Access Protocol (SOAP) is an XML-based protocol for the exchange of information in a distributed environment. A SOAP message is encoded as an XML document.
The CICS Web services assistant is a set of batch utilities that can help users transform existing CICS applications into Web services and enable CICS applications to use Web services provided by external providers. The CICS Web services assistant comprises two utility programs:
- DFHLS2WS generates a Web service binding file from a language structure. This utility also generates a Web service description.
- DFHWS2LS generates a Web service binding file from a Web service description. This utility also generates a language structure that you can use in your application programs.
You can use Rational Developer for System z for Web services and XML development. A CICS pipeline is responsible for dealing with SOAP headers. It’s implemented as a series of programs, and can be configured by end users by using message handlers. A message handler is a program in which you can perform your own processing of Web service requests and responses.
Labels:
blog,
CICS,
COMMAREA,
communication,
conversational,
DFHLS2WS,
DFHWS2LS,
Eddolls,
pseudo-conversational,
Rational Developer,
SOA,
SOAP,
transactions,
UDDI,
WSDL,
XML,
XSLT
Sunday, 9 September 2012
Back-ups, archives, and Pogoplug
Storage is always an issue. Whether you’ve got petabytes of it on your mainframe or just 16GB on your smartphone or tablet, the issues are always the same. You want speedy access to the data you’re using, and you want somewhere to store the data you’re not using at the moment but might want to use later.
Speeding access to data can be helped in a number of ways – storing the data contiguously, using indexes to speed up access to individual fields, using shorter paths from the storage medium to the processor, and keeping recently used data in cache storage on the assumption that it will be used again. Human ingenuity has come up with lots of workarounds to ensure that you get the data as quickly as possible.
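To take just the caching idea from that list, the principle is simply that you pay for the slow fetch once and serve repeat requests from memory. A minimal sketch, with a made-up lookup standing in for the slow storage:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=1024)           # keep recently used results in memory
def fetch_record(key):
    time.sleep(0.1)                # stand-in for a slow disk or network read
    return f"data for {key}"

fetch_record("CUST0001")           # slow: goes to the backing store
fetch_record("CUST0001")           # fast: served from the cache
print(fetch_record.cache_info())
```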
But there’s more of an issue with archived data. I could have immediate access to it, but then it would cost as much to store as live data. I could store it on some cheap medium and find it takes a week to get it back to a state that I can use. And something along those lines is used by many archiving organizations. Those tapes are stored inside a mountain somewhere, and the data can be restored – but not quickly.
Depending on the kind of business you’re in, someone has made a decision about costs and storage retrieval speed. If your company is in banking or insurance, that data might come back much faster than if your organization doesn’t promise its customers fast access to old data.
The advent of cloud computing added another layer of complexity to storage. You could store your data on the cloud – which, in psychology terms, is called a nominalization. You call it something, which seems to have meaning, but isn’t a concrete real thing – you can’t put it in a wheelbarrow. You hear politicians use nominalizations all the time. I’ll vote for prosperity and world peace – but what does that actually mean? Prosperity for someone from a third-world country might seem quite close to poverty to me! And world peace can always be achieved by having a tank on everyone’s lawn. Your idea of what is meant by a nominalization may be completely different from the idea held by the next person!
Anyway, I digress. So nowadays, you can store your data on the cloud. You don’t know where it’s physically stored; you just know how to access it. And I assume much the same models are used for the price of cloud storage – fast access is dearer than slow access. Amazon came up with what it calls Amazon Glacier. This is a cheap service for archived or backed-up data that you don’t access very often. In line with that pricing model, retrieval can take several hours. Amazon Glacier only charges organizations for the storage space they use.
I’ve been using a Pogoplug for over two years and I’ve mentioned it in this blog a few times. A Pogoplug started life as a small device that plugged into your router and allowed you to plug in memory sticks that you could access from a browser anywhere. The company has recently expanded its cloud offerings and has done a deal with Amazon to offer cloud storage at a very competitive rate. The solution isn’t for mainframers, but makes sense for individuals and small companies.
The Pogoplug Web site at http://pogoplug.com gives all the price plans they’re offering. So, a small organization would be able to back up files from their computers, smartphones, and tablets to one secure location. All the company has to do is buy a Pogoplug device or download the Pogoplug Server software and run it on a Windows PC or a Mac. They can then back up and archive their files – and we’re talking terabytes of space. Users can continuously copy all or part of their data to Pogoplug’s secure offsite cloud. Any changes they make to their original folder will be synchronized automatically.
It seems like a useful addition to the all-important back-up and restore choices a company has to make.
Sunday, 2 September 2012
IBM's big baby
IBM has traditionally announced powerful mainframes in alternate years to more affordable models. Last year saw the z114, so this year we were expecting IBM to deliver something big – and that’s what it’s done with the zEnterprise EC (Enterprise Class) 12.
The zEC12 has a hexa-core design that runs at an eye-watering 5.5GHz and is almost unique in supporting transactional execution. That means it treats system resources in much the same way as a transactional database system and so eliminates the overhead of software locking systems. You can expect to hear more about this Transaction Execution Facility. The z12 chip is implemented in a 32 nanometer high-K metal gate process – its predecessor (z11) used a 45 nanometer process.
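Hardware transactional execution obviously isn’t something you can show in a few lines of application code, but the underlying idea – do the work optimistically, commit only if nothing conflicted, and retry otherwise, rather than holding a lock for the whole operation – can be sketched. Here’s a purely illustrative Python version (it’s not how the zEC12 facility is actually programmed):

import threading

class VersionedCounter:
    # Toy optimistic-concurrency counter: read, compute, then commit only if
    # nothing changed in the meantime - abort and retry otherwise.
    def __init__(self):
        self._value = 0
        self._version = 0
        self._lock = threading.Lock()      # only guards the brief commit check

    def increment(self):
        while True:
            snapshot_version = self._version   # note the version first...
            snapshot_value = self._value       # ...then read the data
            new_value = snapshot_value + 1     # do the work with no lock held
            with self._lock:                   # brief commit window
                if self._version == snapshot_version:
                    self._value = new_value
                    self._version += 1
                    return new_value
            # another thread committed in the meantime: abort and retry

The contrast is with a conventional design that takes a lock before doing any work and holds it throughout – which is exactly the software locking overhead the hardware facility is trying to remove.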
The top end machines can take 120 cores, and the processors include an Enhanced DAT2 facility, which allows languages to exploit 2GB page frames. The processors also have a decimal-floating-point zoned conversion facility that will be supported by the next release of PL/I.
The zEC12 also supports Flash Express memory, which is used by systems that anticipate bursts of activity. And IBM claims this is one of the most secure enterprise systems ever, with its Common Criteria Evaluation Assurance Level 5+ security classification. It even has a tamper-resistant cryptographic co-processor, called the Crypto Express4S.
IBM said the new mainframe was the product of over $1bn in research and development investment, with work carried out in 18 IBM labs around the world and with IBM’s clients.
And as well as being the mainframe for the cloud computing generation, it also provides hybrid computing support. So not only can it run z/OS, users can consolidate their Linux workloads with Linux on System z (using SUSE Enterprise Server or Red Hat Enterprise Linux), and run the other System z operating systems – z/VM, z/VSE, and z/TPF. And the IBM zEnterprise BladeCenter Extension (zBX) lets users combine workloads designed for mainframes with those for POWER7 and x86 chips, including Microsoft Windows Server.
The zEC12 increases the performance of analytic workloads by 30 percent compared to its predecessor. It seems that support for the DB2 Analytics Accelerator enables clients to run complex business analytics and operational analytics on the same platform. There are also IT systems analytics capabilities. It analyses internal system messages to provide a near real-time view of the system’s health, including any potential problems. Called IBM zAware, the technology learns from the messages to recognize patterns and quickly pinpoint any deviations, using the information to identify unusual system behaviour and minimize its impact.
Unusually, sites will be able to run the zEC12 without a raised data centre floor using new overhead power and cabling support. And for ‘green’ sites, zEC12 can deliver 25 percent more performance and 50 percent more capacity with the same energy footprint as its predecessor.
In a statement of direction, IBM said the zEC12 will be the last high-end System z server to offer support for zAAP specialty engine processors, although IBM will continue to support running zAAP workloads on zIIP processors (zAAP on zIIP). This is intended to help simplify capacity planning and performance management, while still supporting all the currently eligible workloads.
The new processors should be available from about the end of September. It looks like a very powerful piece of kit with a number of interesting features that should keep mainframe users happy for a year or two.
Labels: 12, blog, EC, Eddolls, Flash Express memory, IBM, mainframe, Transaction Execution Facility, zAware, zEC12, zEnterprise
Sunday, 26 August 2012
What can I say?
I’ve reviewed software that lets you talk to your computer before now. In fact I talked about Dragon NaturallySpeaking Version 9 in this blog back in January 2007. By then I’d gone from completely unimpressed to saying it was worth a look. But I’ve just got my hands on Dragon NaturallySpeaking Version 12 from Nuance and I am very impressed!
I had to do a little bit of training in order for the software to recognize my voice and create my own personal profile – and that went fairly smoothly. But what was most impressive was the fact that it actually wrote what I said! In the past, I’ve played a game with my children where I’d say a few sentences, then they’d read what the software thought I’d said, and we’d repeat the process until gales of laughter overtook us reading the strange interpretations of our speech appearing on screen. But now my children are grown up, and so is this software. I found the accuracy of the product very good. I didn’t need to separate each word as I spoke – in fact, it recommended that I spoke in phrases. And it pretty much wrote on screen what I was saying.
The hard part for me was the control commands, because I was unfamiliar with them, and to begin with I found myself wanting to just get my hands on the keyboard and make corrections quickly myself. To be fair, the people at Nuance understand this user frustration, and they have put the Dragon Sidebar onscreen the whole time the software is in use. That makes it very easy to see the commands I need to use – for example saying: “delete ‘whatever’”, or “go to end of line”. I found that I was quite quickly remembering the commands that I used regularly and not needing to look them up. In addition, the software comes up with suggestions for what you might have said if you’re correcting, and you can easily choose one of those alternatives. You simply say “correct ‘whatever’”, and then say “choose one” (or two or three – whichever is the better alternative).
As well as using the software with Word, it’s very easy to use it with Gmail, Hotmail, Facebook, and Twitter. You simply say: “post to Facebook” or “post to Twitter”. You can launch Word by saying: “Open Microsoft Word”. You can italicize, embolden, capitalize, insert lists, and so on.
My version of the software came with a headset and microphone that I plugged into my laptop and which I used for most of this review. You can also download the Dragon Remote Microphone app for your phone. I used the Android version, and there is an iPhone version too. From the Profile menu, I selected “Add dictation source to current User Profile” and then “Dragon Remote Mic”. The laptop software then created a QR code that the software on my phone could scan. After that, I was connected. The only thing I needed to do was a little bit more voice recognition training. Once the training’s done, I can wander round my office talking to my phone and letting the words appear on my computer screen. As long as we’re on the same wifi network, it all works.
It’s very easy to tell the microphone to start listening and to stop – so that it doesn’t try to write down the whole of a telephone conversation that interrupts a session!
My conclusion this time is that the software is very easy to use and very accurate. It would make life very easy for someone who had problems using a keyboard and would not lead to the frustration experienced with much older voice recognition products. For able-bodied people, I think it makes a useful alternative input source, and the ability to walk around and talk – and then correct any errors later – makes it really useful for people, like me, who blog and write articles and often need to get a whole lot of ideas out of our heads onto paper (the screen) quickly.
Version 12 of Nuance’s Dragon NaturallySpeaking is definitely worth a 9 out of 10 score. And definitely worth seeing whether there’s a place for it in your organization or your home.
Labels: blog, Dragon NaturallySpeaking, Eddolls, Nuance, Version 12
Saturday, 18 August 2012
Why is everyone talking about Hadoop?
Hadoop is an Apache project, which means it’s open source software, and it’s written in Java. What it does is support data-intensive distributed applications. It comes from work Google were doing and allows applications to use thousands of independent computers and petabytes of data.
Yahoo has been a big contributor to the project. The Yahoo Search Webmap is a Hadoop application that is used in every Yahoo search. Facebook claims to have the largest Hadoop cluster in the world. Other users include Amazon, eBay, LinkedIn, and Twitter. But now, there’s talk of IBM taking more than a passing interest.
According to IBM: “Apache Hadoop has two main subprojects:
- MapReduce – The framework that understands and assigns work to the nodes in a cluster.
- HDFS – A file system that spans all the nodes in a Hadoop cluster for data storage. It links together the file systems on many local nodes to make them into one big file system. HDFS assumes nodes will fail, so it achieves reliability by replicating data across multiple nodes.”
It goes on to say: “Hadoop changes the economics and the dynamics of large-scale computing. Its impact can be boiled down to four salient characteristics. Hadoop enables a computing solution that is:
- Scalable – New nodes can be added as needed, and added without needing to change data formats, how data is loaded, how jobs are written, or the applications on top.
- Cost effective – Hadoop brings massively parallel computing to commodity servers. The result is a sizeable decrease in the cost per terabyte of storage, which in turn makes it affordable to model all your data.
- Flexible – Hadoop is schema-less, and can absorb any type of data, structured or not, from any number of sources. Data from multiple sources can be joined and aggregated in arbitrary ways enabling deeper analyses than any one system can provide.
- Fault tolerant – When you lose a node, the system redirects work to another location of the data and continues processing without missing a beat.”
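To give a feel for the MapReduce half of that, here’s the classic word-count example written as a pair of small Python scripts for Hadoop Streaming – Hadoop’s mechanism for running mappers and reducers written in any language that reads standard input and writes standard output. The HDFS paths and the streaming jar location below are just placeholders:

#!/usr/bin/env python
# mapper.py - emit one "word<TAB>1" line for every word read from stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word, 1))

#!/usr/bin/env python
# reducer.py - Hadoop sorts mapper output by key, so identical words arrive together
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t")
    if word != current_word:
        if current_word is not None:
            print("%s\t%d" % (current_word, count))
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print("%s\t%d" % (current_word, count))

You’d then run something along the lines of:

hadoop jar /path/to/hadoop-streaming.jar -input /user/example/books -output /user/example/wordcount -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py

and MapReduce takes care of splitting the input across the cluster’s nodes, running the mapper on each split, shuffling and sorting the intermediate pairs, and feeding them to the reducers – while HDFS looks after where the data actually lives.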
According to Alan Radding writing in IBM Systems Magazine (http://www.ibmsystemsmag.com/mainframe/trends/whatsnew/hadoop_mainframe/) IBM “is taking a federated approach to the big data challenge by blending traditional data management technologies with what it sees as complementary new technologies, like Hadoop, that address speed and flexibility, and are ideal for data exploration, discovery and unstructured analysis.”
Hadoop could run on any mainframe already running Java or Linux. Radding lists tools that make life easier, such as:
- SQOOP – imports data from relational databases into Hadoop.
- Hive – enables data to be queried using an SQL-like language called HiveQL.
- Apache Pig – a high-level platform for creating the MapReduce programs used with Hadoop.
There’s also ZooKeeper, which provides a centralized infrastructure and services that enable synchronization across a cluster.
Harry Battan, data serving manager for System z, suggests that 2,000 instances of Hadoop could run on Linux on the System z, which would make a fairly large Hadoop configuration.
Hadoop still needs to be certified for mainframe use, but sites with newer hybrid machines (z114 or z196) could have Hadoop today by putting it on their x86 blades, for which Hadoop is already certified, and it could then process data from DB2 on the mainframe. But you can see why customers might be looking to get it on their mainframes because it gives them a way to get more information out of the masses of data they already possess. And data analysis is often seen as the key to continuing business success for larger organizations.
Labels: Alan Radding, Apache, Apache Pig, blog, Eddolls, Hadoop, Harry Battan, HDFS, Hive, IBM, IBM Systems Magazine, Java, mainframe, MapReduce, SQOOP, ZooKeeper
Sunday, 12 August 2012
IBM and RIM
There was a time when getting out your BlackBerry was synonymous with being a cool young executive. You can probably remember people whose phones beeped every time they received an e-mail, and who acted like they were the Fonz. But then a different fruit became king of the hill – Apple. You weren’t anyone without an iPhone. And now it’s probably Android for the really cool kids because you can control it – without needing to jailbreak it!
BlackBerry had a second wind. Lots of youngsters used BlackBerries because of the messaging facility. They could chat to their friends – using BBM – for free.
Research In Motion (RIM) – the company that makes BlackBerry phones – is based in Waterloo, Ontario, Canada. In January this year, Thorsten Heins took charge as CEO. He must be wondering what he can do to re-invigorate the firm. But there’s more to RIM than just the BlackBerry phone. There’s meant to be a new BlackBerry 10 operating system coming out next year, but, perhaps more importantly, RIM has a BlackBerry Enterprise Services (BES) unit, which operates a network of secure servers used to support BlackBerry devices.
The rumour mill suggests that this is what IBM has its eye on. And this is the reason that RIM stock jumped up by 9 percent. Although, at this stage, it is only a rumour. Both companies are saying the usual thing about not commenting on rumours – instead of looking blankly at the questioner and going, “what!!”.
Were IBM to buy the whole of RIM, that would be a very bold decision. As I said, the BlackBerry 10 operating system doesn’t come out until early next year. It might be possible to license the OS to third parties, or they could sell off RIM’s Network Operations Centre (NOC). The NOC transmits all BlackBerry data for both enterprise and consumer users. If IBM were to keep the handset division, they would need to encourage developers to write apps for it. At the moment there are apps for iPhones and Android, and, when the new Surface tablet comes out with Windows 8, there’s likely to be a lot of development enthusiasm there. But I’m not sure there’s any real excitement about developing apps for the BlackBerry.
So, it’s more likely that IBM has its sights set on the enterprise services unit, which has pretty good security software that it uses to give IT departments control over corporate information. The encryption algorithms it uses make it very difficult for anyone to intercept e-mails or instant messages (BBM). This makes it very popular with people in banking and other financial services. According to RIM, there are 250,000 BlackBerry Enterprise Services (BES) servers installed worldwide.
However, if IBM doesn’t buy the NOC part of the business, they’ll need to come to some sort of working agreement with RIM (or whoever owns that part) over who has control over IBM’s customers’ traffic using it.
From a BlackBerry customer perspective, knowing IBM was looking after enterprise services would seem like a good thing. From IBM’s perspective, they would get access to a way of transmitting secure data from a platform that they don’t currently include in their portfolio – mobile computing. And IBM could add the software into (probably) WebSphere.
But whether IBM should do it or not depends on how much it’s going to cost them. RIM may well claim that they are making pots of money from the fees they charge mobile carriers for subscriber access to their network. They may say that the new operating system will see a resurgence of their popularity. If that were the case, then I would advise IBM to walk away now. If, on the other hand, IBM can get access to the security algorithms and the servers in use at financial institutions for a reasonable sum, then why not? Or if IBM thinks Oracle might be interested – then perhaps they should snap it up.
But, who knows? After all, it’s only a rumour!
Monday, 6 August 2012
Keeping it short
I was looking at my e-mail signature this week and thinking about what it needs to say. I then discovered that some of the hyperlinks in that signature were incredibly long – much longer than they needed to be. And I thought that perhaps other people might benefit from similarly shorter hyperlinks in their signatures.
So let’s have a look at my old signature:
Trevor Eddolls CEO iTech-Ed Ltd
IBM Champion
P. 01249 443256 | M. 07901 505 609 | E-mail | Web site |
Blog | Twitter | Facebook | LinkedIn | Arcati Yearbook | Virtual IMS user group | Virtual CICS user group
The top line stays the same – it’s good to tell people who you are, your job title, and the name of the company in your signature!
I wanted to keep “IBM Champion” – I’ve been an IBM Champion since 2009.
And the phone numbers haven’t changed since 2004, although the devices I use have. I disconnected the fax a few years ago and finally threw it out last year!
It was that bottom list that caused the issues. My company has a (fan) page on Facebook, and that originally had a URL like http://www.facebook.com/pages/iTech-Ed-Ltd/201736748040. Since then, we’ve set up a username for it, so the address I wanted to publish was fb.com/itech-ed – which is obviously so much shorter. If you haven’t got a short name for your business page, you enter fb.com/username on the address line of your browser. Provided you have more than 25 likes on your page, Facebook will tell you whether you can have a short name for the page and whether your choice is available. The rule is that you only get one chance – so make sure you choose a good, memorable, and appropriate name.
I also wanted to add my Google plus address. The URL for that is https://plus.google.com/u/0/108580724051942905828/posts. You can get a much shorter name by going to http://gplus.to/. I got a short name from there. So, now, my Google plus address is http://gplus.to/teddolls.
With LinkedIn, I managed to change the address for my personal profile from http://www.linkedin.com/profile/view?id=3544583 to http://www.linkedin.com/in/teddolls, which makes more sense, but then requires a second click to get to the profile page. You can do much the same.
The Web site addresses I couldn’t really shorten without them losing a sense of where the link would end up. Just using bitly or tiny url wouldn’t have provided any sense of security to an e-mail recipient about where clicking on that link would actually end up.
What I decided to do with the look of my signature was to group social media on one line and other Web sites on a second line. This prevents that line of links from being too long itself!
The easiest way to actually create the signature in the first place is to use Word. You can write the text and format it. The clever bit is to take a word – like Facebook – and put in the link. You simply select the word, press ctrl and k at the same time, and put the information you want in the pop-up box (see below).
Select “Existing File or Web Page” from the “Link to:” column on the left side of the box. Across the bottom of the box, it asks for the Address. This is the URL you want your word to hyperlink to. There’s one other clever thing you can do here. Where it says “ScreenTip…”, you can enter some information that you want to appear when a person mouses over that part of your signature. So, for example, you might put “Find us on Facebook”, or “Follow us on Twitter” if it had been a link to your Twitter account. Click “OK” to save your changes – obviously! You can do that with each part of your signature. The ScreenTip acts in much the same way as adding a title tag to your HTML link.
So, not much of a change superficially, but some of those links are now shorter. Which means that my new-look corporate signature looks like:
Trevor Eddolls CEO iTech-Ed Ltd
IBM Champion
P. 01249 443256 | M. 07901 505 609 | E-mail | Web site |
Blog | Twitter | Facebook | LinkedIn | G+ |
Arcati Yearbook | Virtual IMS user group | Virtual CICS user group
You may not be too bothered about the appearance of my signature, but you might like to shorten some of the URLs hidden away in your signatures too.
Monday, 30 July 2012
Whatever happened to IPv6?
IPv4 is pretty much everywhere. It uses 32-bit addresses, which means (and you can check the maths on this) that there is a finite number of IP addresses that people can use – 4,294,967,296 addresses, in fact. Although the number of available addresses seems pretty large, you only have to look round at how online everything is to recognize that those addresses will run out. In fact, they already have – although techniques such as Network Address Translation (NAT), classful network design, and Classless Inter-Domain Routing have helped to keep everything working.
Back in the 1990s people (including me) were writing about the address limitation of IPv4, and the proposed solution was IPv6, which, after a very long gestation period, became commercially available in 2006. World IPv6 Launch day was as recently as 6 June 2012. The big selling point of IPv6 is that it uses 128-bit addressing, which results in far more addresses being available for people to use – 340,282,366,920,938,463,463,374,607,431,768,211,456 apparently.
An IPv4 address looks like: 192.169.1.62. An IPv6 address looks like: 2001:0db8:85a3:0042:0000:8a2e:0370:7334. And that’s the problem that many organizations face. How do they convert from IPv4 to IPv6? How do they translate their addresses? How do they test these new addresses when they need to maintain a working business environment?
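If you want a feel for the two formats, Python’s standard ipaddress module (Python 3.3 and later) will parse and expand both, and a couple of lines of arithmetic show where those enormous numbers come from:

import ipaddress

v4 = ipaddress.ip_address("192.169.1.62")
v6 = ipaddress.ip_address("2001:0db8:85a3:0042:0000:8a2e:0370:7334")

print(v4.version, v4)    # 4 192.169.1.62
print(v6.version, v6)    # 6 2001:db8:85a3:42:0:8a2e:370:7334 (the compressed form)
print(v6.exploded)       # the full form with every leading zero shown

print(2 ** 32)           # 4294967296 possible IPv4 addresses
print(2 ** 128)          # 340282366920938463463374607431768211456 for IPv6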
William Data Systems (WDS) has come up with a new module for its ZEN system called the ZEN Application Gateway – or ZAG. WDS has found that many of its customers are currently facing the costs and risks of changes to applications, networks, and hardware. ZAG helps implement IPv6 under z/OS by minimizing the need for companies to make any changes to their applications, hardware, or networks. ZAG users can input IPv4 and get out IPv6, or they can do things the other way round.
And while z/OS sites are able to run separate IPv4 and IPv6 stacks – in fact, keeping the two IP stacks segregated like this may be something many choose to do because it enables them to keep their IPv6 testing traffic separate from their production IPv4 traffic – one thing they could use ZAG for is to sit in between the two stacks and act as a bridge, allowing IPv6 clients in one stack to access IPv4 applications in another stack (and vice versa). Obviously, sites could achieve the same thing without ZAG, but they would need to weigh up the additional costs and risks (as well as management time) resulting from application, network, and hardware changes.
According to WDS’s press release, ZAG “allows customers to:
- Test new IPv6 applications using their existing IPv4 infrastructure
- Access their IPv4 applications from new IPv6 clients
- Segregate their IPv6 and IPv4 traffic on different IP stacks
- Act as a bridge, allowing traffic to connect between IPv6 and IPv4 stacks
- Provide pseudo Network Address Translation (NAT) capabilities in an IPv6 environment, which is very useful if you want to hide internal IPv6 addresses from the outside world.”
If you are at SHARE in Anaheim between 5 and 10 August, WDS will be demonstrating their ZEN suite on a Raspberry Pi. WDS comments that this is “just in case hardware budgets keep reducing!”, adding that one lucky delegate will win the Raspberry Pi in the SHARE prize draw.
Here’s a photo of their Raspberry Pi in action!
Moving from the limited IPv4 to IPv6 is something that people have been talking about for a long time. It now looks like the time to actually migrate is here. Anything that makes the job easier (and less painful) has got to be a good thing.