Sunday, 26 June 2022

Hybrid environments

I can remember, as I’m sure you can, back in the 1990s when everyone was sure that the mainframe was pretty much on life support and anyone who wanted to stay in business should be migrating all their applications to Linux servers or Windows servers.

To be honest, it was a bad time for IBM. The 1990s were probably the last decade when different computing platforms pretended that other computing platforms didn’t exist, and that their model of how to do things was the only way. By the end of the decade, things like SNA were more-or-less mothballed and everything had an IP address. The mainframe had realized that it needed to be part of the bigger computing environment.

As always in the history of the world, some companies (or empires) became huge and then disappeared, while others soldiered on. Everyone seemed to work on laptops running Windows or Linux, and they were able to communicate with any servers, including mainframes.

The rapid improvements in PC technology meant that people could work from almost anywhere without needing cables constantly plugged into their personal computers. Battery life became long enough to last all (the working) day – so, no need to plug into the mains – and WiFi meant there was no need to plug into the network.

Mainframes continued the trend of being a part of the wider IT environment. If anything worked well on distributed platforms, it was very likely to work better on a mainframe. Linux became part of the mainframe environment. In fact, it’s possible to run mainframes just with Linux, and not have z/OS at all.

Containerized applications, Kubernetes, and much more will work on a mainframe. Things like Ansible, for IT automation, work on a mainframe. Zowe allows non-mainframers to control mainframes. VS Code can be used to develop applications on a mainframe. DevOps is a mainframe thing. And there’s so much more. The point I’m making is that the mainframe is very much open to utilizing the best technology around. It’s not parochial, it’s not old-fashioned, and it has developed hugely since it first appeared in the 1960s.

Mainframe APIs can connect to APIs on mobile platforms, the Web, and the cloud, using REST to create new and exciting combined applications. The mainframe is a player in the total computing environment.
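
As a minimal sketch of what that looks like in practice, here’s a hypothetical REST call from a Python script to a mainframe-hosted service. The host name, path, credentials, and response field are made up for illustration – in reality they’d be whatever your z/OS Connect (or similar) gateway actually exposes.

# Hypothetical example: calling a REST API hosted on a mainframe
# (eg exposed through a z/OS Connect gateway) from any other platform.
# The URL, credentials, and response shape are illustrative only.
import requests

BASE_URL = "https://mainframe.example.com:9443/accounts/api"

def get_account_balance(account_id: str, user: str, password: str) -> float:
    """Fetch an account balance from a (hypothetical) mainframe REST service."""
    response = requests.get(
        f"{BASE_URL}/balances/{account_id}",
        auth=(user, password),   # basic auth, purely for simplicity
        timeout=10,
        verify=True,             # TLS all the way to the gateway
    )
    response.raise_for_status()
    return response.json()["balance"]   # assumes the service returns {"balance": ...}

if __name__ == "__main__":
    print(get_account_balance("12345678", "apiuser", "secret"))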

However, the 2020s have seen a return to some of the outsider views of the 1990s. History seems to be repeating itself. ‘Experts’ are saying that the only way forward, the only way to modernize the mainframe, is to dump it. The mainframe should be consigned to the history books. And, rather than migrating everything off the mainframe to Linux or Windows, in 2022 the new Promised Land is the cloud. Once every bit of workload has been migrated to the cloud, all your computing worries will be over!

If experience has taught us anything, it’s that this kind of ‘all or nothing’ thinking is seldom if ever right. In fact, ‘all or nothing thinking’ is recognized by Cognitive Behavioural Therapy (CBT) as one of a number of common cognitive distortions.

So, if the mainframe is so wonderful, as I’ve been explaining, should we simply ignore the cloud? The one-word answer to that is an obvious and resounding ‘no’. The cloud offers some huge advantages over the mainframe for certain types of workload. The cloud should be treated like laptops insofar as both are very good at what they do, and both should be included as part of the total IT environment used by an organization.

When deciding which workloads are worth migrating to the cloud, it’s probably easier to look at things the other way round, ie which workloads need to stay on the mainframe. There are probably two types of application that need to stay on the mainframe. The first is applications with high security needs. Mainframes are clearly the most secure platforms available, with things like pervasive encryption, which became available with the z14 mainframe. Data is encrypted at rest as well as in flight. IBM also introduced Data Privacy Passports.

The second type of application that should stay on the mainframe is one that uses a large amount of data. IMS DB, in particular, was developed to allow data to be accessed very quickly in order to reduce the elapsed time for running an application. Where accessing and using data is time critical, those applications need to stay on a mainframe.

There is a third type of application that’s best left on the mainframe, at least for sites that have a z16 mainframe with its Telum chip and are concerned about credit-card fraud. The Telum processor provides on-chip AI inferencing, which allows banks to analyse transactions for fraud as they happen, and at massive scale. IBM assures us that the z16 can process 300 billion inference requests per day with just one millisecond of latency. Users of the z16 will be able to reduce the time and energy required to handle fraudulent credit-card transactions. For both merchants and card issuers, this could mean a reduction in lost revenue, because consumers would be spared the frustration of false declines that might otherwise push them towards other cards for future transactions.

Any other workloads can then be evaluated to see whether they would benefit from being run in the cloud and, if so, migrated to that environment. It’s also possible that certain applications can really only run in the cloud – things that use big data for data analytics, for example. In addition, application development work often progresses faster in the cloud.

So, hybrid working using the best features of the cloud makes a lot of sense, in the same way that using the best laptops and tablets makes sense in order to get the job done. But, a still very important part of the digital environment for any large organization is the mainframe. Don’t throw the baby out with the bathwater.

Sunday, 19 June 2022

Mainframes, cloud, and technical debt


When it comes to developing new software on any platform – not just mainframes – we’re probably all familiar with the three choices available. It can be good, or quick, or cheap. And you get to choose two out of the three.

If you want it done quickly and you want it to be good, then it can’t be cheap. If you want it developed quickly and cheaply, then it won’t be any good. Or if you want it to be cheap and really good, then it will take a long time.

Those of you who have worked in project management have probably come across Brooks’ Law. Before we talk about that, let’s look at a simple maths problem. If it takes a man (in these more enlightened times I obviously mean a person) five days to build a wall, how long will it take two men? With the simplicity of mathematics, the answer is clearly two and a half days. So, if you apply that logic to your IT project, the more people you have working on the project, the fewer days the project will take! Anyone who has ever worked on a project knows that just isn’t true. And that’s where Brooks’ Law comes in.

Fred Brooks published his book, “The Mythical Man-Month”, in 1975. He concluded that adding manpower to a late project simply made it later. He explained this by saying that it took time for new people to get the hang of a project – what he called the ramp-up phase. During that time, other project members were less productive because they were showing the new people what to do. And the new people weren’t productive until they knew what to do.

The other problem he identified was the communication overhead. As more people join a project, more time is spent finding out what everyone else is doing on their part of the project.
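
Brooks quantified that overhead: with n people on a project, the number of potential communication channels is n(n-1)/2, so it grows quadratically rather than linearly. A quick sketch – just illustrating the formula, not any particular project – makes the point:

# Brooks' intercommunication formula from The Mythical Man-Month:
# the number of communication channels between n people is n*(n-1)/2.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for team_size in (2, 5, 10, 20, 50):
    print(f"{team_size:3d} people -> {channels(team_size):4d} channels")

# 2 people share 1 channel, but 50 people share 1,225 channels:
# doubling the team far more than doubles the talking.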

While he agreed that some tasks were simple and could easily be divided up amongst a group of people – eg cleaning rooms in hotels – even that reached a point where adding more people would mean they just got in each other’s way. And, besides, most projects require specialist work and aren’t that easily divisible.

Going back to our original three pillars of project development – quick, cheap, good – one common result of doing something quickly is what’s called technical debt. Technical debt is the implied cost of additional reworking on a project caused by choosing an easy (limited) solution now rather than using a better approach that would take longer. A proof-of-concept project is usually full of technical debt, but that’s usually acceptable because all that was needed was to prove that something could be done. However, for most projects, as with financial debt, there will come a time in the future when that debt has to be paid. In terms of the project, it means that the software will have to be updated or even completely rewritten.

The consequence of that is that it will take time and will have a cost implication. The question that organizations face is whether to get the code right from the start – with the implication that the project will take much longer than was planned, and therefore cost more than originally planned. Or does the project team go ahead with the project as is, and let a different project team pay off the technical debt they have left behind at some time in the future, when the original team are probably working for different companies or doing different jobs for the same company? At least the original team will make their deadlines and receive any bonuses associated with that.

For mainframe sites, ‘paying’ the technical debt can result in downtime for the application and a particular service being unavailable to internal or external customers. That can result in service level agreements not being met, which can have financial consequences in addition to those associated with rewriting the code.

Internal projects are one thing, but do the same rules apply when you are buying software from a known mainframe software vendor? Is it worth skipping Version 1 and waiting until all the kinks have been ironed out in Version 2? And does the same apply to new software services like those offered by cloud providers?

For cloud providers like AWS and others, there’s money to be made from companies that are currently running their own servers, and there’s even more money to be made from companies that have mainframes. And that’s why mainframe sites are being targeted by the marketing arms of these cloud giants.

That’s why AWS has announced its Mainframe Modernization service, its development and runtime environment, which is designed to make it easier for customers to modernize and run their mainframe workloads on AWS. AWS also offers mainframers the option to maintain existing applications as they are and re-platform them to AWS with minimal code changes.

All this is powered by the Micro Focus Enterprise Server, which lets IBM mainframe applications run on Linux or Windows servers. There are no upfront costs, and customers only pay for what they use.
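
To give a flavour of what that looks like from the customer side, here’s a minimal sketch using Python and boto3. It assumes the AWS Mainframe Modernization service is reachable through the ‘m2’ boto3 client, that credentials and a region are already configured, and that the response contains an ‘applications’ list with ‘name’ and ‘status’ fields – treat all of those as assumptions rather than gospel.

# Hypothetical sketch: listing applications defined in AWS Mainframe
# Modernization from Python. Assumes the boto3 client name is "m2" and
# that AWS credentials/region are already configured in the environment.
import boto3

m2 = boto3.client("m2")

response = m2.list_applications()
for app in response.get("applications", []):
    # Each entry is a dict of application metadata; field names may vary
    # between SDK versions, so use .get() rather than direct indexing.
    print(app.get("name"), app.get("status"))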

System integrators will work with mainframe-using organizations to discover mainframe workloads, assess and analyse migration readiness, and then plan the migration and modernization projects.

That all sounds very good. I guess the question at the back of my mind is whether this migration will create any technical debt.

Sunday, 12 June 2022

The importance of trust in business


All mainframe sites face the same problem that they’ve been facing for 50 years. Do you buy the best piece of software available for one specific job, for example a piece of monitoring software, or do you buy most of your software from a single vendor because you expect that each piece of software will work with the other software from that same vendor?

Then there’s the issue of whether to buy a piece of software that does everything that you want it to do at the moment, but also comes with a whole load of other facilities and features built in that you might want to use in the future (or might not). Or do you go for a cheaper software package that does exactly what you want and no more? And might you want it to offer additional features at some time in the future?

And does it really matter? At the end of the day, isn’t the whole point to get your work done? Whoever looks after the budget would like that to be done as cheaply as possible. For you, as long as the software you use does the job, it doesn’t really matter who supplied it, does it?

That last thought seems to have been the attitude of IBM in its dealings with AT&T. And the consequence has been that IBM has been ordered to pay BMC Software $1.6 billion for fraud and contract violations. And all IBM did was migrate AT&T away from BMC’s products to its own.

So, here’s the story. In 2007, AT&T were using BMC’s software. In 2008, IBM and BMC drew up a contract that governed the business relationship between the two companies. In 2015, the companies agreed some amendments to the contract. One of those was an Outsourcing Attachment (OA) that was meant to prevent IBM from migrating the companies’ mutual customers to IBM’s own software. Oh dear!

Later in 2015, IBM started an internal initiative called Project Swallowtail to migrate AT&T from BMC’s software to its own. In 2017, BMC sued IBM, claiming that IBM already planned to breach their agreement and poach AT&T’s software business when the two companies renewed their power-sharing deal in 2015. IBM argued that AT&T rejected BMC’s products and chose to use IBM’s for its own reasons, which it claimed was fair game under its pact with BMC. What actually happened was AT&T replaced 14 BMC software products with IBM products. There were also five BMC products that were replaced by third-party software, and one BMC product was retired.

And in 2022, US District Judge Gray Miller finally gave his decision on the case. IBM was ordered to pay BMC Software a whopping $1.6b. The ruling came after a seven-day non-jury trial in March.

Why were the damages awarded against IBM quite so punitive? According to the written findings, “The court finds by clear and convincing evidence that IBM fraudulently induced BMC into entering the 2015 OA so that it could exercise rights without paying for them, secure other contractual benefits, and ultimately acquire one of BMC’s core customers”. It goes on to say that IBM did this intentionally.

The report also says that BMC were happy with the 2015 amendments because they thought that it “would put IBM’s troubling history of non-compliance to bed”. Sadly, it didn’t, and IBM appears to have taken AT&T’s custom away from BMC.

The judge also wrote “IBM’s business practices – including the routine eschewal of rules – merit a proportional punitive damages award”. Later, the judge said IBM “believed – especially in light of BMC’s reluctance to engage in litigation – that it could always settle for a small percentage of the claim, or for ‘pennies on the dollar’”. Finally, he wrote, “IBM’s conduct vis-à-vis BMC offends the sense of justice and propriety that the public expects from American businesses.”

Did IBM own up and say, “it’s a fair cop”? No. What it said was that IBM has “acted in good faith in every respect in this engagement”. IBM went on to say, “This verdict is entirely unsupported by fact and law, and IBM intends to pursue complete reversal on appeal”. IBM also asserted, “The decision to remove BMC Software technology from its mainframes rested solely with AT&T, as was recognized by the court and confirmed in testimony from AT&T representatives admitted at trial”.

If you want to know how Judge Miller got to the figure of $1.6b, here’s the breakdown. He awarded $717,739,615 in actual contractual damages, $168,226,367.29 in prejudgment interest, and another $717,739,615 in punitive damages. That gives a total of $1,603,705,597.29, with post-judgment interest of roughly 2%, compounding per annum, added on top.
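
For anyone who wants to check the arithmetic, the three components do add up to that headline figure – this little sum simply restates the numbers from the judgment:

# Quick check that the damages components sum to the headline figure.
actual_damages = 717_739_615.00
prejudgment_interest = 168_226_367.29
punitive_damages = 717_739_615.00

total = actual_damages + prejudgment_interest + punitive_damages
print(f"${total:,.2f}")   # -> $1,603,705,597.29 (post-judgment interest accrues on top)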

It wasn’t all good news for BMC. The judge rejected their bid for findings of lost profits, additional breaches of contract, and unfair competition. However, and I don’t know how unusual this is in US courts, the judge said that if a reviewing court finds that BMC isn’t entitled to the judgment he issued, the company could come back and seek recovery under one of its alternative legal theories.

Definitely an interesting case. I’m sure we’re going to hear more about it.