Monday 31 August 2009

Trust, bad debts, and the economy as a whole

My whole business strategy is based on trust. Software and hardware vendors trust that when I sign a non-disclosure agreement, I won’t reveal their secrets ahead of the scheduled launch day. Companies that send me review units of hardware and software trust that I will actually review the unit and publish the review. And if organizations ask me to write something for them, I trust that they will pay me for it.

And this usually works very well. I’ve never published an article about a product that hasn’t been formally announced. And I’ve always written and published (somewhere) reviews of hardware and software I’ve been sent.

It’s the other side of the trust equation that has, from time to time, been a problem in varying degrees. For example, one company that advertised on the Virtual IMS Connection Web site (www.virtualims.com), which my company looks after, was sometimes a bit slow to pay its monthly invoice.

Technical Support magazine got further and further behind with payments for articles a few years ago, until it eventually stopped publishing. The good news – for me and the other contributors – is that it eventually paid all its outstanding debts. So well done to them: our trust was eventually rewarded.

On the whole, organizations that have asked me to write articles or internal documentation for them have been very good at paying at the agreed time. Similarly, sponsors and advertisers of the Arcati Mainframe Yearbook (www.arcati.com) have generally been good at paying on time. Our trust relationship worked. But being a smaller company, iTech-Ed (www.itech-ed.com) needs to receive payments when promised, because it has commitments to other companies that must be met. If company A withholds a payment to company B, then company B either has to withhold payments to company C or incur bank charges for a temporary overdraft. And eventually, somewhere along the line in these days of credit crunch, someone is going to owe the bank so much that they cease trading altogether.

I wrote a short article for a company a few years ago. Shortly after I sent the article, the company disappeared! Its Web presence was gone and my e-mails were never answered. It was the first (and last) time I ever wrote for that organization, and I lost an afternoon’s work. The trust relationship was completely broken.

More worrying for me is when an article I have written at an agreed price has been published on the Internet, but the commissioning organization doesn’t pay at the agreed time and then stops replying to e-mails! I’m perfectly happy for my blogs or other articles to be quoted by other bloggers or article writers. That’s great – and it happens all the time. What irritates me is companies that agree a price, commission an article, publish it, and then don’t pay. No matter how much the article costs, it’s cash flow to smaller companies that keeps the economy going – that, and the thing called trust.

I wrote an article entitled “Which browser is best for me” for Sift Media back in April. It’s now September and they still haven’t paid up. Perhaps you’re thinking that they didn’t publish it. Well, you can find it at http://www.accountingweb.co.uk/item/196884 and you can find a reference to it at http://www.simplyraydeen.com/faq/96-browsers/131-which-is-the-best-browser-for-me. It’s also apparently mentioned at http://www.infotechaccountants.com/forums/showthread.php?t=18036 and http://britanniaradio.blogspot.com/2009/04/editors-note-looking-at-budget.html and http://phentermineonline.to.pl/news/Which-is-the-best-browser-for-me,128756.html.

All of which suggests that Accounting Web, which is owned by Sift Media, got good mileage out of my work.

The point I want to make – to large and small companies alike – is: pay your bills on time. Putting someone on 90 days or more before you pay puts the whole economy at risk. Small amounts of money moving rapidly from one organization to another lead to larger amounts changing hands, and soon the economy is back on its feet and we all benefit. One slow payer or one bad debt throws a spanner in the works for everyone! And, of course, it breaks any trust that exists between the two companies.

Monday 24 August 2009

CA Eclipsed

According to Wikipedia (http://en.wikipedia.org/wiki/Eclipse_(software)), Eclipse is a multi-language software development environment comprising an IDE and a plug-in system for extending it. It is written primarily in Java and can be used to develop applications in Java and, by means of the various plug-ins, in other languages as well, including C, C++, COBOL, Python, Perl, and PHP, among others.

Eclipse started life as an IBM Canada project. It was developed by Object Technology International (OTI) as a Java-based replacement for the Smalltalk-based VisualAge family of IDE products. The Eclipse Foundation was created in January 2004. Lee Nackman, Chief Technology Officer of IBM’s Rational division, claims that the name “Eclipse” was chosen to target Microsoft’s Visual Studio product.

Eclipse was originally meant for Java developers, but through the use of plug-ins to the small run-time kernel, other languages can be used. And there are Eclipse widgets – SWT, the Standard Widget Toolkit, which Eclipse itself is built with.
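
For a flavour of what those widgets look like, here’s a minimal SWT “hello world” – just a sketch, which assumes the SWT library for your platform is on the classpath:

import org.eclipse.swt.SWT;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Label;
import org.eclipse.swt.widgets.Shell;

public class HelloSwt {
    public static void main(String[] args) {
        Display display = new Display();    // connects to the native window system
        Shell shell = new Shell(display);   // a top-level window
        shell.setText("Hello from SWT");

        Label label = new Label(shell, SWT.NONE);
        label.setText("Eclipse widgets in action");
        label.pack();

        shell.pack();
        shell.open();
        while (!shell.isDisposed()) {       // the standard SWT event loop
            if (!display.readAndDispatch()) {
                display.sleep();
            }
        }
        display.dispose();                  // free the native widget resources
    }
}

Unlike Java’s own Swing, SWT delegates to the operating system’s native widgets, which is why Eclipse-based tools tend to look and feel like native applications.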

And now CA has got in on the act. CA InterTest Batch and CA InterTest for CICS now feature a Graphical User Interface based on the Eclipse Platform, which, CA claims, makes it easier for new and experienced mainframers to execute core testing and debugging tasks. The press release adds that these tasks have historically been time-consuming phases of the mainframe application development and deployment life-cycle.

What CA is claiming is that the new CA InterTest GUI helps developers re-use and re-purpose existing mainframe application code in order to further improve productivity and support Service Oriented Architecture implementations. It says that by plugging CA InterTest tools into their larger Eclipse-based integrated development environments, customers can more easily and seamlessly debug end-to-end composite applications that include mainframe, distributed, Web, and/or mobile components.

IBM has been making a big thing of Eclipse for a long time – well, it would, I suppose, as it had a hand in Eclipse’s development. Its Rational mainframe tools integrate with Eclipse.

Also last week, Compuware announced a new version of its analysis and debugging tool, Xpediter/Eclipse 2.0. The company said that Xpediter helps the next generation of developers analyse applications and quickly understand the business processes and data flows in those applications, avoiding an unnecessarily steep learning curve. Xpediter/Eclipse 2.0 also helps these new developers become productive more quickly by moving away from the traditional “green screen” interface and providing the modernized point-and-click environment to which these new employees are accustomed.

This announcement points more clearly to the thinking behind these product updates: mainframers are getting old, so, in order to keep the machines functioning, there needs to be a way for younger people to become productive very quickly without learning the arcane ways of the machine – and Eclipse provides such an environment for them to work in. Watch out for more Eclipse-related announcements.

Sunday 16 August 2009

IMS Open Database

The latest webinar from Virtual IMS CONNECTION (www.virtualims.com) was entitled “IMS Open DB functionality in IMS V11”, and was presented by Kevin Hite, an IMS lead tester with IBM. Kevin is a software engineer, originally from Rochester, NY, who has worked at IBM on both WebSphere for z/OS and IMS. He is the test lead for IMS V11 Open Database and the team lead for a new test area, IMS Solution Test, which is responsible for integrating new IMS function into customer-like applications in a customer-like environment.

IMS Open Database is new in V11, and Kevin informed the user group that it offers scalable, distributed, and high-speed local access to IMS database resources. It supports business growth, giving more flexibility in accessing IMS data to meet growth challenges, while at the same time allowing IMS databases to be processed as a standards-based data server.

What makes IMS Open Database different is its standards-based approach, using Java Connector Architecture 1.5 (Java EE), JDBC, SQL, and DRDA. It enables new application design frameworks and patterns.

One particular highlight Kevin identified in the new solution is its three universal drivers. These are:

  • Universal DB resource adapter
    – JCA 1.5, which provides: XA transaction support and local transaction support; connection pooling; connection sharing; and the availability of multiple programming models (JDBC, CCI with SQL interactions, and CCI with DLI interactions).
  • Universal JDBC driver
  • Universal DLI driver.
For distributed access:
  • All Universal drivers support type 4 connectivity to IMS databases from TCP/IP-enabled platforms and runtimes, including:
    – Windows
    – zLinux
    – z/OS
    – WebSphere Application Server
    – Stand-alone Java SE
  • Resource Recovery Services (RRS) is not required if applications do not require distributed two-phase commit.
For local connectivity, Kevin informed us, the Universal drivers support type 2 connectivity to IMS databases from z/OS runtimes, including WebSphere Application Server for z/OS, IMS Java dependent regions, CICS z/OS, and DB2 z/OS stored procedures.

The two Universal drivers for JDBC – the IMS Universal DB Resource Adapter and the IMS Universal JDBC Driver – offer a greatly enhanced JDBC implementation (see the sketch after this list) including:

  • JDBC 3.0
  • Local commit/rollback support
  • Standard SQL implementation for the SQL subset supported
    – Keys of parent segments are included in the table as foreign keys, allowing a standard SQL implementation
  • Updatable result sets
  • Metadata discovery API implementation
    – Uses metadata generated by DLIModel Utility as “catalog data”
    – Enables JDBC tooling to work with IMS DBs just as it does with DB2 DBs.
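
To give a flavour of what that means in practice, here’s a minimal sketch of a Java program using JDBC to read IMS data through the Universal JDBC driver. The connection URL, host, port, metadata class, and the PHONEBOOK table and its columns are all my own illustrative assumptions – check the Universal driver documentation for the details of a real installation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ImsJdbcSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical type 4 (TCP/IP) connection URL - the exact URL syntax
        // and any driver registration steps are assumptions; consult the IMS
        // Universal JDBC driver documentation for your installation.
        String url = "jdbc:ims://imshost.example.com:5555/MyMetadataClass";

        Connection conn = DriverManager.getConnection(url, "userid", "password");
        try {
            Statement stmt = conn.createStatement();
            // PHONEBOOK, LASTNAME, and FIRSTNAME are placeholder names standing
            // in for a real segment and its fields, as mapped by the metadata.
            ResultSet rs = stmt.executeQuery(
                    "SELECT LASTNAME, FIRSTNAME FROM PHONEBOOK");
            while (rs.next()) {
                System.out.println(rs.getString("LASTNAME") + ", "
                        + rs.getString("FIRSTNAME"));
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close();   // always give the connection back
        }
    }
}

The point is that, to the application, a hierarchic IMS database looks like any other JDBC data source – which is exactly what lets standard JDBC tooling treat IMS DBs like DB2 DBs.
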
This is just a small part of a very interesting presentation and gives little more than a flavour of what IMS professionals can expect from IMS Open Database in V11 of IMS.

Sunday 9 August 2009

How old is old?

Picking up on my blog of a couple of weeks ago about COBOL reaching 50 this year, I thought it might be interesting to see just how old some of the technology we know and love actually is.

For example, CICS – the Customer Information Control System – has been around since 1969. Although we tend to associate CICS with IBM Hursley these days, it was originally developed at Des Plaines, Illinois, in the USA and was called PU-CICS, with the PU bit standing for Public Utility. In the early 1970s, development was at Palo Alto, but it moved to Hursley in 1974.

IMS – Information Management System – is even older, having first appeared in 1968. IMS was developed for the space race and contributed to the success of the Apollo program. It’s said – though no-one outside IBM knows for sure – that IMS is IBM’s highest-revenue software product. If you’re not already aware, I organize the Virtual IMS Connection user group at www.virtualims.com. It’s free to join, and you get six free Webinars a year and six free user group newsletters. But I digress.

Batch processing goes right back to the very early days of computing in the 1950s.

TSO, or Time Sharing Option, first appeared in the early 1970s. Originally – and there’s a clue in its name – it was an optional extra on OS/MVT (Operating System/Multiprogramming with a Variable number of Tasks), a precursor to MVS. TSO became a standard feature with the release of MVS in 1974. ISPF (Interactive System Productivity Facility), which is associated with TSO, didn’t appear under that name until the 1980s.

DB2 – Database 2 – first appeared in 1983. DB2 is a relational database and, as well as on mainframes, turns up on PCs and other IBM platforms, where it competes with Oracle and Microsoft’s SQL Server products. Oracle appeared in 1979.

Mainframes themselves were developed during the 1950s.

The World Wide Web is meant to have come into existence in 1991 – thanks to the work of Tim Berners-Lee.

IBM came into existence in 1924, when a company called the Computing-Tabulating-Recording Company (CTR) changed its name to IBM. It had been trading as IBM in Canada since 1917.

Microsoft was founded in 1975 – but enough about them.

Citrix was founded in 1989 by Ed Iacobucci and others who’d worked on the ill-fated OS/2 project at IBM.

COBOL’s 50. Java – based on the work of James Gosling, and initially called Oak – was first released by Sun Microsystems in 1995. The international standard for C++ arrived in 1998.

It’s interesting looking back and realizing just what a significant effect these golden oldie technologies have had, and how they will continue to thrive into the foreseeable future.

Sunday 2 August 2009

zPrime rattles a few cages

I seem to be spending the summer talking about zIIPs and zAAPs (System z Integrated Information Processors and System z Application Assist Processors). And a couple of weeks ago I was enthusing about NEON Enterprise Software’s new zPrime product and how users should get it and save money before IBM changed the rules.

And I’m inclined to still think that way, it’s just that IBM has responded to the announcement much faster than I imagined.

For people who’ve been living off-planet: IBM charges users by the amount of General Purpose Processor (GPP) capacity they use, while also making specialty processors available for things like Linux and DB2 workloads. Doing your processing on a specialty processor saves money because you’re not using the chargeable GPPs – and, in real life, it can put off the need for an expensive upgrade. Into this situation comes the zPrime bombshell. NEON reckons that, using its new software, 50% of workloads can run on specialty processors – and that’s not just DB2, that’s IMS, CICS, TSO/ISPF, batch, whatever.

Not surprisingly, at the thought of seeing its potential revenue cut in half, IBM has taken a dim view of the announcement. In a recent customer letter, IBM’s Mark S Anzani, VP and Chief Technology Officer for System z, cautions customers about the zPrime product. Apparently, customers with questions about IBM’s position on zPrime can contact Mark on anzani@us.ibm.com.

The customer letter contains the following paragraph:
“In general, any product which is designed to cause additional workloads, not designated by IBM or other SW providers as eligible to run on the Specialty Engines, to nevertheless to be routed to a Specialty Engine should be evaluated to determine whether installation and use of such a product would violate, among other things, the IBM Customer Agreement (for instance, Section 4 regarding authorized use of IBM program products such as z/OS) and/or the license governing use of the IBM “Licensed Internal Code” (frequently referred to as “LIC”) running on IBM System z servers, or license agreements with any third party software providers.”

NEON sent out a press release on 16 July saying: “NEON Enterprise Software is responding to a massive wave of interest over a newly-released software product called NEON zPrime that saves mainframe users millions of dollars in IT costs by realizing the full potential of IBM System z specialty processors.”

And how do other software vendors feel about this? CA – probably the biggest apart from IBM – has ignored the announcement. The latest e-mail I have says that Chris O’Malley, executive vice president and general manager of CA’s Mainframe Business Unit, will deliver a keynote address to SHARE in Denver. As usual, no word from BMC. No, it’s a PR company for DataDirect that drew my attention to Gregg Willhoit’s blog (http://blogs.datadirect.com/2009/07/ibm-cautions-customers-about-neon-enterprise-softwares-zprime-product.html) and the IBM customer letter (http://blogs.datadirect.com/media/IBM%20position%20document.pdf) on the DataDirect site.

It will be interesting to see how many customers install zPrime and what happens next.