Sunday 14 July 2024

Interesting browser updates

I was checking on Statcounter to see how popular different browsers were. I wasn’t surprised to see that Google’s Chrome was the most popular with nearly two-thirds (65.68%) of the market share. Safari came second with 17.96%, which probably gives an indication of the percentage of Macs, iPhones, and iPads in use out there. In third place is Edge. Everyone who has bought a PC will have Edge as the default browser. To be honest, the first thing I do when I get a new laptop is download a different browser – and, judging by the figures, so do lots of other people. Firefox is fourth with 2.75%. I always used to use Firefox, and I liked using it. I just didn’t install it on my newest laptops. C’est la vie! I was surprised to see Samsung Internet in fifth place. I’d never considered using it, and I have a Samsung phone. It scored 2.58% of the market share. Sixth was Opera with 2.26%.

Looking at figures for just North America, it came as no surprise to see Apple’s browser had nearly a third of the market share at 31.74%. Chrome had over half at 52.55%. In Europe, the figures were still in the same order, but Chrome had 61.89% of the market and Safari had 18.55%.

Whatever browser you choose, though, it’s still just a browser – and you only use it to access your webmail, get to Amazon to do your shopping, check your bank balance, book a holiday, or go to a million other websites, don’t you?

Once you’ve personalized your browser, and got it to remember the user-id and password you use for the websites you visit frequently, and, especially, the ones you only visit once a year, you don’t really want to change it. After all, what extra could a different browser do?

I’ve just started using Opera, or Opera GX as it calls itself. Opera, the browser, has been around for 25 years and is available on laptops and mobile phones. It has recently had some updates to its built-in artificial intelligence (AI), called Aria, which add some interesting new features.

Firstly, it has the ability to turn text prompts and descriptions into unique images using the image generation model Imagen2 by Google. Aria identifies the user’s intention to generate an image based on conversational prompts. Users can also use the ‘regenerate’ option to have Aria come up with a new image. Aria allows each user to generate 30 images per day.

Secondly, Aria can now read answers out loud by using Google’s WaveNet model. It benefits those who normally use screen readers, like to multitask, or need to hear information instead of reading it. To get this to work, I was using the Aria command line; I had to click on the speaker icon in the bottom-right corner to have Aria read the text response. It was easy to pause the speaking by clicking the pause button that replaced the speaker icon. Clicking the speaker icon again restarted the dialogue.

Thirdly, it’s gaining contextual image understanding. Opera says that Internet users find themselves searching for information about something they saw just as often as for something they read or heard about. So, Aria is also gaining image-understanding capabilities. This means that users can now upload an image to Aria and, as part of the chat conversation, ask the AI tool about it. For example, if the image is of an unknown headset, Aria will identify its brand and model as well as provide some context about it. Or a user can take a picture of a maths problem and ask Aria how to solve it.

To get this to work, I had to download the developer version of the browser, create an account, and sign in. Once I’d done that, I clicked on the ‘+’ button on the right of the chat input box, and then selected the ‘upload image’ option. The explanation of the context of the image was quite good.

As part of the update, the text-based chat experience with Aria has also been improved with the addition of two new functionalities: ‘Chat Summary’ and ‘Links to Sources’. The former provides users with a concise summary of an entire conversation with Aria, allowing them to recap the most important information. In the latter feature, Aria supplies the user with links to sources about the topic of the conversation, enabling them to get more context regarding their enquiry. In addition, the Aria command line in the browser can now be easily activated by pressing the ‘ctrl + /’ or ‘cmd + /’ button combination. This enables the user to open the additional floating window instead of using Aria from the extension page. There’s also a small icon on the left-hand side of the browser that opens up Aria.

Features that were already part of Opera GX that you might be interested in include: RAM, CPU, and network limiters, a built-in free VPN (virtual private network), Twitch and Discord integration (chat facilities used by gamers), and a built-in ad blocker.

I’m quite enjoying using the browser. You might want to give it a try.

 

Sunday 30 June 2024

Mainframe security – there really is a war going on

In the mainframe world, everyone has been talking about security for a very long time. In fact, I’ve seen some people yawn as the topic of security comes up again – “been there, done that, got the T-shirt” they say. But it’s not that easy. Just because all the security you had in place last year seems to have worked, doesn’t mean that it is secure enough for this year. There is a veritable arms race going on and no-one can afford to be complacent.

When I say no-one, I mean no-one in an organization can be complacent, perhaps least of all the chief financial officer (CFO). It’s the CFO’s job to safeguard their organization’s reputation and to save their company money. That was the job of the CFO at the USA’s second biggest health insurer, Anthem, which was hacked in December 2014. Nearly ten years later, the substantial cost to the company is only finally becoming clear.

That cyberattack saw 79 million individuals’ personal information compromised. Firstly, Anthem agreed to pay $115 million to those people whose information was potentially stolen. The plaintiffs’ case was that Anthem should pay their costs of checking whether the exfiltrated data was being used nefariously by anyone else. Then in 2020, Anthem agreed to pay $16 million to the US Department of Health and Human Services, Office for Civil Rights (OCR) and take substantial corrective action to settle potential violations of the Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rules. Also in 2020, the company paid $39.5 million as part of a settlement with state attorneys general from 44 states and Washington, DC. On top of that, there may well have been payments by Anthem for the ransom, and for technical experts to try and resolve the attack. All in all, a hefty payout for any organization.
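
Just adding up the settlements mentioned above (ignoring any ransom, legal fees, or remediation costs, which aren’t public), a quick Python sum gives a sense of the scale:

    # Disclosed Anthem settlements mentioned above, in millions of US dollars.
    settlements = {
        "class action (affected individuals)": 115.0,
        "HHS Office for Civil Rights (HIPAA)": 16.0,
        "state attorneys general (44 states + DC)": 39.5,
    }

    total = sum(settlements.values())
    print(f"Disclosed settlements total: ${total:.1f} million")  # $170.5 million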

And that wasn’t a one-off attack. According to the Cost of a Data Breach Report from IBM Security, the average cost of a data breach is US$4.45 million. For companies, like Anthem, in the healthcare sector, the average cost of a data breach was US$10.93 million.

In the UK just recently, hospitals and GP practices found Russian hackers had infiltrated and rendered unusable the IT systems of Synnovis, a company that analyses blood tests. That led to hospitals having to cancel operations and appointments. From personal experience, I know of a small web design and hosting company that says its websites are under constant attack. And I know of local secondary schools that have been attacked.

Everywhere and everyone with any kind of tech is currently under attack. And they need to do their bit in the arms race that’s taking place between us – I’m assuming it’s the good guys who are reading this – and the people who are trying to hack your systems.

Oxford Capital recently sent out a press release reminding us that the World Economic Forum has shown that ransomware attacks have increased by nearly 300%, with over 50% of these attacks specifically targeting small businesses. Oxford Capital then highlighted the top AI security threats organizations need to be prepared to combat. They were:

  • AI-powered phishing attacks using AI to create highly-convincing and personalized emails. These attacks are designed to deceive employees into revealing sensitive information or downloading malicious software.
  • Automated vulnerability exploits. Hackers are using AI to scan for and exploit vulnerabilities in software systems at an unprecedented speed and scale. That’s why installing patches is such a priority.
  • Deep fake scams are where cybercriminals use AI to create realistic audio and video impersonations of company executives. These deepfakes can be used to manipulate employees into transferring funds or sharing confidential information.
  • AI-driven ransomware allows attackers to efficiently target, copy, and encrypt critical business data. 
  • Malicious AI bots can be used to conduct malicious activities such as credential stuffing, where bots attempt to gain access to accounts using stolen credentials. 
  • Weak passwords are a major cybersecurity threat because they can be easily guessed or cracked, allowing unauthorized access to sensitive information.

The suggested solutions given by Oxford Capital include:

  • Strong password policies. If you don’t already do this, use complex passwords and update them regularly.
  • Multi-factor authentication (MFA) requires a user to present two or more factors to an authentication mechanism before they are given access (a minimal sketch of one common second factor follows this list).
  • Regularly update software to ensure that the latest security patches are installed and no easy-access back doors (vulnerabilities) are anywhere on your system.
  • Employee training. I’ve been part of this kind of exercise where you give everyone in your organization training to recognize phishing attacks and other cyber threats, and then later test random attendees. Even so, you still find staff clicking on your dodgy test email. Therefore, I would suggest that training is ongoing.
  • Use robust cybersecurity measures. They recommend users invest in comprehensive security solutions to detect and respond to threats efficiently. I would suggest mainframe-related products like File Integrity Monitoring (FIM) from MainTegrity to provide not only protection, but also early warning if some kind of attack is taking place, as well as automation to suspend jobs and users until you’re sure they really are allowed to do what they seem to be doing to your mainframe.
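
To illustrate the MFA bullet above, here is a minimal sketch of a time-based one-time password (TOTP) generator of the sort authenticator apps use as a second factor. It’s a simplified, illustrative RFC 6238-style implementation; the secret shown is a placeholder, not anything you should use for real:

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        """Generate an RFC 6238-style time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval           # 30-second time step
        msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    # Placeholder shared secret, for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))

In a real deployment you would use an audited library and keep the shared secret in a secure store; the point is simply that the second factor changes every 30 seconds, so a stolen password alone is not enough.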

The list might also have included using air-gapped hardware to protect backups from being overwritten, as well as routinely protecting data in transit from being stolen.

What I’m suggesting is that everyone needs to take steps to protect whatever data they have on their computing platforms, including the cloud, and people with the most to lose, like mainframers, need to absolutely keep one step ahead in the data security arms race. And the CFO, and other top execs, need to make sure the IT team have everything they need in order to do that. After all, it’s those top execs who will be paying for it if mainframe security isn’t as good as it needs to be.

 

Sunday 16 June 2024

IBM versus LzLabs

The IBM UK court case against Swiss-based LzLabs and UK-based Winsopia highlights a number of important issues. Firstly, it seems natural justice that if your company has spent money developing some technology, you should have every right to copyright it and prevent other people from using your original work without paying for it. Secondly, you should be able to choose who you license your technology to, and they should be expected to pay for that licence. That is basically IBM UK’s case. It is saying that LzLabs has taken its tech and is using it as if it were its own. It wants LzLabs to cease and desist.

LzLabs has a different view. It is saying that it has found a way to emulate a mainframe and do it in software. It is a completely different thing, and IBM, being a large organization, is using its size and weight (metaphorically) to prevent LzLabs from lawfully conducting its business.

At its heart, that’s what this court case is all about.

What exactly has LzLabs done? Their product allows customers to migrate off IBM mainframes and onto other hardware platforms without making changes to the software they are running. Obviously, if enough customers do that, it’s going to affect IBM’s revenue stream going into the future. But that’s not what this court case is about. What IBM is asserting is that it is “inconceivable” that LzLabs, and its UK subsidiary Winsopia, could have developed their migration software without illegally reverse engineering IBM’s technology.

By focusing on that aspect, IBM can make it seem they are not being bully boys trying to see off a potential competitor. They are, quite rightly, protecting their copyrighted material, which they have created, at great expense, over a number of years. And, put that way, it seems right and proper that IBM should sue.

Of course, LzLabs is saying that its tools were developed lawfully in keeping with the EU Software Directive and UK law, which encourages innovation by competitors. So, they have done nothing wrong.

We all know about IBM, its mainframes, its cloud, its work on AI, and its business in general. LzLabs, you may recall, launched the Software Defined Mainframe (SDM) in 2016, which, as mentioned earlier, provides a way for mainframe applications to run on other platforms, eg Linux.

IBM wants LzLabs to stop selling its product, which IBM claims is using IBM’s own software.

You might be wondering where the UK company Winsopia comes into all this. Well, IBM claims that Winsopia leased an IBM mainframe (that bit is not disputed), but it then breached its licence, and it was that breach which allowed LzLabs to develop SDM. What was the breach? Basically, it reverse-engineered and reverse compiled the mainframe software. That allowed LzLabs to understand the design and structure of the mainframe software, and allowed them to recreate it. That was prohibited by the contract between IBM and Winsopia. In fact, IBM is suggesting that Winsopia is a shell company whose sole purpose is to act as a front for LzLabs and gain access to IBM equipment and software.

LzLabs and Winsopia, not surprisingly, insist that the contract wasn’t breached and that they were able to build SDM because they had spent years observing, studying, and testing how customer applications interact with mainframes. LzLabs claims that it has a team of experienced engineers, and they used information published by IBM about its technology. Plus, there’s widespread industry knowledge about mainframes. In addition, LzLabs states that it could never directly access Winsopia’s mainframe.

The defence team also affirmed that SDM was functionally completed in 2013, which predates the creation of Winsopia.

Let’s turn our attention to a product from ColeSoft – its source-level assembler debugger, z/XDC, which first became available in 1980. IBM’s expert witness, Michael Swanson, has been in court proposing that LzLabs’ use of z/XDC was “invasive”, suggesting that LzLabs had used the debugger to disassemble IBM’s modules.

The LzLabs defence team, however, suggested that Swanson appeared “to have no real-world mainframe knowledge” since 1999. The lawyers showed that z/XDC is widely used by “big players” in the mainframe arena “for the purpose of developing commercial software”. Swanson agreed that using z/XDC for testing and debugging was not an “unusual or uncommon” use of the tool.

The court case continues.

LzLabs is owned by John Moore, and this is not the first time one of his companies has faced IBM in the courtroom. Moore founded NEON Enterprise Software (in 1995), which developed a product called zPrime.

As you know, IBM charges users by the amount of General Purpose Processor (GPP) capacity they use, while also making specialty processors available for things like Linux and Db2. Now, doing your processing on a specialty processor saves money because you’re not using the chargeable GPPs – and, in real life, it can also put off the need for an expensive upgrade. zPrime allowed users to run an estimated 50% of their workloads on specialty processors – that’s not just Db2, that was IMS, CICS, TSO/ISPF, batch, whatever. IBM sued NEON in 2009, and the case was settled in May 2011. NEON Enterprise Software lost and disappeared.

It will be interesting to see how the LzLabs case goes over the next few weeks.

 

 

Sunday 9 June 2024

Making good decisions

Picture the scene: you’re sitting in the boardroom representing the mainframe team, and sitting with you are the new cloud team, and the established distributed team, and there are also some people from finance, and even a couple of users. The meeting starts, chaired by the CEO, who wants to get involved in such an important decision for the organization. Maybe you’re deciding on the best platform for some new application that’s going to be used. Perhaps you’re making choices about what should be included in next year’s budget and where it should be spent. Or maybe your company wants to introduce artificial intelligence (AI) in all its customer-facing applications. Or, it might be some other big project.

I would guess, with very few exceptions, you’ll be championing the mainframe as the best platform to use. However, the other IT people will be championing their platforms equally enthusiastically. How does the CEO make a choice with the conflicting expert advice he’s getting and with his own biases?

Let’s look at cognitive biases first. These are biases people have (like thinking vaccinations are bad for you, or a political party is always bad, or mainframes are always best) that lead them to draw erroneous conclusions. Your CEO can overcome his own biases by getting information from a variety of sources.

The CEO’s decision-making process means that they need to weigh up the various options and determine the best course of action. That means the mainframe guy (you) needs to come to the meeting with more than a gut feeling about what’s right and natural biases. You need to bring some real-life examples. You need to be able to demonstrate where other mainframe sites have successfully implemented whatever is under discussion – or, at least, something similar. If no-one else has done something very similar, it might be possible to break down the task under discussion into smaller component parts and illustrate where they have been successfully used on a mainframe, and where they have been unsuccessfully used on any other platform.

The next stage for the CEO is to analyse the arguments that are being put forward by the different groups at the meeting. He needs to interpret what has been proposed, and then draw conclusions based on the information in front of him. He needs to judge the information’s merit, accuracy, and appropriateness. He needs to check that the information is from a reliable source – just being on the Internet doesn’t always make it reliable. The CEO needs to identify any assumptions made by the people putting forward different proposals (such as “the cost of cloud computing is likely to remain low over the next three years”), and also identify any biases in individuals’ arguments. This can be done by him actively questioning proposals or arguments being made.

For many big decisions, there is plenty of data available from different sources that can be checked for reliability (accuracy) and then analysed. The information drawn from this needs to be valid, relevant, and significant. This information can be used to support the claims or assertions of the different groups at a meeting.

Lastly, the CEO needs to summarize the arguments that have been put forward, ensuring that he has understood them completely. We all know companies that have moved applications off their mainframe hardware because the cost of software is much cheaper on distributed systems. It’s only later that they find they not only need to spend more on hardware to run their new software, they also need more people to run the additional hardware. In the end, their off-mainframe budget can be higher than staying on the mainframe. It’s looking at all aspects of a potential solution that’s important at this stage. The evidence put forward needs to be from credible sources and needs to be complete.

The mainframer at the meeting needs to be a good communicator in order to put forward well-reasoned arguments for their particular point of view, and argue against other opinions.

The CEO, in moving to a final decision, needs to weigh the competing evidence. Some evidence will corroborate or support a proposal. Some, from multiple sources, will be convergent and support the same conclusion. Some will be contradictory, and some may be conflicting. The CEO needs to keep in mind the issue that this meeting is trying to address and the desired outcome of any proposed solution. He needs to look at the outcome of the different proposals in computing terms, in terms of cost and profit, in terms of their impact on staffing numbers and morale, in terms of the reputation of the company, and many other aspects. Each proposed solution can then be evaluated against these and any other relevant criteria. A positives and negatives table could be drawn up to do this. Usually, different criteria are weighted differently. At the end, a final solution can be settled on, and a rational decision can be made.
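
To make that weighting exercise concrete, here’s a small, purely illustrative sketch in Python – the criteria, weights, and scores are invented, not recommendations:

    # Hypothetical criteria weights (importance) and per-proposal scores (0-10).
    weights = {"cost": 0.3, "security": 0.3, "staff impact": 0.2, "reputation": 0.2}

    proposals = {
        "stay on mainframe": {"cost": 6, "security": 9, "staff impact": 8, "reputation": 8},
        "move to cloud":     {"cost": 7, "security": 6, "staff impact": 5, "reputation": 7},
        "hybrid":            {"cost": 7, "security": 8, "staff impact": 7, "reputation": 8},
    }

    # Weighted score for each proposal: higher is better under these (invented) weights.
    for name, scores in proposals.items():
        weighted = sum(weights[c] * scores[c] for c in weights)
        print(f"{name}: {weighted:.2f}")

The value of doing it this way is not the numbers themselves, but that the weights and scores are written down where everyone in the meeting can challenge them.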

The next stage is the impact analysis and communication with people affected. If the impact involves people losing their jobs, then plans need to be put in place to offer retraining for newly-created jobs in the organization or for filling other vacancies. Otherwise, staff must be helped to deal with redundancy and get work elsewhere.

If only customers are going to be affected, then advertising and social media can be used to explain how much better things will be. If employees will be impacted, it’s important to ensure that carefully-crafted messages are sent out explaining exactly what changes are taking place, who will benefit and how, and how those impacted by the change will be helped into new roles.

This discussion uses ideas taken from critical thinking. This is a technique that can be used to find the best solution to a problem and then implement it successfully. It’s designed to identify alternative ideas and test them out. It should help overcome cognitive biases. And it should help to analyse data. The last stage would be self-reflection, where a person can review how well each stage was handled, what personal thoughts and experiences occurred, and what personal lessons were learned.

Using these ideas can help any mainframer prepare for those important meetings that may be coming up.

Sunday 12 May 2024

Just a simple IT upgrade!

I was working hard on the laptop on my desk. I had Chrome open to read my email – that’s Yahoo as a catch-all, Gmail for one particular email address, and Outlook for email for an organization I am working with. I also had Facebook open. Then I had Edge open to run Microsoft 365 for another organization I work closely with. Then there was my desktop Outlook app for a third organization I have close ties with.

I was typing into Word to use its spellchecker and grammar checker. I had InDesign open for its publishing layout facilities. I had Photoshop open to create a graphic that was going into InDesign. Excel was open because I was doing some sums on it – for my accountant. And I had Sticky Notes open with a list of all the tasks I had to complete that day. I also keep Notepad open for when I want to take all the formatting out of any text that I’m copying from one app and pasting in another. A standard day.

As I was jumping about from one application to another, I thought that I really could do with more screen real estate. I needed to see more of these applications at the same time. I remembered an ad I’d seen on Facebook a while back offering two additional screens to fit round your laptop’s screen. I looked on Amazon and they had the same thing available. The one criterion was that you needed a USB-C socket on your laptop. I did, so I ordered these extra screens. Happy days!

However, not all USB-C sockets are the same! The one I had was quite old. It was the right size and shape, but didn’t seem to handle video information transfer. Not to worry, I thought: I was going to buy a new laptop in the next few months anyway, so I’d buy one with the latest USB-C ports, and I’d make sure it had two of them. It seems it was many years ago that I last bought a new PC because my old one had the wrong holes in the back to connect peripheral devices. I ordered a new laptop with two USB-C ports, a 17-inch or so screen, and a 1TB SSD drive. All good, I thought.

Unfortunately, one of the USB-C ports was used to power the laptop. So, effectively, I only had one usable port. But that was OK – it connected to one screen, and I used a USB-A port and an HDMI port to connect to the other. Everything worked. But, of course, I hadn’t installed any software.

Firstly, I downloaded Chrome as my browser. Then I went to Microsoft 365 admin and downloaded Word, Excel, etc. No problems. I installed the printer drivers, and I could print. I installed Filezilla so I could upload files to my websites. I downloaded Audacity and FFmpeg for Audacity so that I could edit the sound files for my podcasts. I added Adblock Plus to my browser to reduce those pesky adverts. I downloaded VLC to watch videos. And I downloaded Windows 10 Firewall Control (even though I have Windows 11) to lock down my laptop. It was all going well.

I downloaded and paid for a new copy of Wondershare Filmora to be able to make and edit videos for my YouTube channel. I added the snipping tool and Notepad to my toolbar because I use them both a lot.

Then I wanted to install my old copy of Adobe Creative Suite (CS6), which is the last version you could actually own rather than pay monthly for. I installed it and it worked perfectly. Then I tried to restart my laptop, but it just wouldn’t start. I had to find the recovery key and use that to get to a screen that would wipe everything and restart. I tried installing CS6 first this second time, but the same thing happened when I restarted my machine. So, I wiped everything and started again. This time I looked for alternatives to the three main CS6 applications that I use.

I downloaded GIMP as a replacement for Photoshop. I’ve used it in the past and found it excellent. I downloaded CoffeeCup as a Dreamweaver alternative, but it was free for only 30 days, so I tried Brackets, which is open-source software. It looked fine, but I found Phoenix Code as an update to Brackets. That seems to work really well for website creation. So, that’s what I’m using. Then I found Scribus as an alternative to InDesign. It’s different, but it offers much the same facilities – certainly so far. I just need to find some way to open InDesign files (I have lots of .indd files) without having a copy of InDesign. I had already bought Kofax Power PDF Advanced, so I installed that on a second machine, and I can manipulate PDF files.

I use a VPN, and I needed to find some way of installing a free one. That proved harder, until I found the Opera GX browser, which has a built-in VPN. So simple, and free!

I have one game on my work laptop, and I’ve had it for years and years. It’s Pinball Arcade from Microsoft in the 1990s. I could install it on my new laptop and start it, but the actual pinball games wouldn’t run. They’d crash my machine. I tried going through Properties and Compatibility mode with the EXE files, but it didn’t help. My solution was to download Steam and use the free pinball games on that. Not that I play pinball very often.

It did take a week, but I am now up and running, able to do everything I used to, and I have plenty of screen real estate to see what I’m working on. I did try to keep the price down. And I’m very happy. If you have any ideas of what I could have done better, please let me know.


 

Sunday 28 April 2024

Mainframes – the state of the industry

No-one who really knows mainframes thinks that they are going away anytime soon. However, that isn’t the view commonly held by people who don’t really understand what a mainframe is and does. In fact, I have written previously about the importance of mainframers explaining the benefits of using a mainframe to non-mainframe specialists in the IT sphere, and to management in general.

Notwithstanding the fact that mainframes and the idea of using mainframes has been under attack since the 1990s, it’s useful to understand what mainframes are doing at the moment and where they are planning to go in the near future.

Let’s start with modernizing the mainframe – a term that pigeon-holes the mainframe as being an anachronistic device little changed during its sixty-year lifespan. Firstly, the mainframe is very modern. It has AI integrated onto its chips. It keeps data secure at rest, in flight, and in use. These are things that would have been little more than science fiction in 1964.

So, is using the cloud a good idea for mainframe sites? The answer, like the curate’s egg, is yes and no – using the cloud is good in parts. There are some things that the cloud does better than the mainframe, and I would suggest that applications or parts of applications that need to use those facilities should use the cloud. There are other things that the mainframe does best. Those applications or parts of applications should stay on the mainframe. There seems to be too much all-or-nothing thinking going on in meetings discussing mainframes and the cloud. You really want to cherry pick the best bits of each platform.

The other big topic – in terms of column inches being written about it – is artificial intelligence (AI). Everyone has heard of AI. Lots of people have tried ChatGPT or Gemini (aka Bard). But few people really get what it means. I now get emails telling me that someone is using AI, for example that podcast platforms are now using AI. And to the untrained eye that looks like a step into the future – but it’s a future built mainly on ideas taken from sci-fi movies and TV shows. If I have a medical scan and need someone to interpret the blobs on that scan, then an AI is probably exactly what I want to do it accurately. If I want some kind of artificial general intelligence (AGI) then I’m probably going to have to wait. And it’s the AGIs that films and TV usually show – usually as the villain!

For mainframers, AIs trained to do specific tasks are great and are worth investigating further. Hoping for something like HAL in the film 2001 is still a long way off. However, using an AI to hack a mainframe is possible because it is a task that an AI can be trained to do successfully. And that means a ransomware as a service (RaaS) AI could be made available to anyone who could pay for it and had a grudge against an organization, or who simply wanted to see what would happen. Like every new technology, AI can be used for good or evil. It certainly makes the idea of using AI to protect the mainframe a very important project to initiate. When I say protect, what I’m really talking about is suspending jobs or users that seem to be doing something untoward, and alerting the security team. The team can investigate further and either resume the job because it is OK, or take further action against the would-be hackers.
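
As a rough illustration of that ‘suspend and alert’ idea (not how any particular product actually works), here is a sketch that flags a user whose activity suddenly jumps well above their historical baseline; the numbers and threshold are invented:

    from statistics import mean, stdev

    # Hypothetical counts of sensitive data-set accesses per day for one user.
    history = [12, 9, 14, 11, 10, 13, 12, 11]   # normal baseline
    today = 87                                  # today's count

    baseline, spread = mean(history), stdev(history)
    z_score = (today - baseline) / spread

    if z_score > 3:   # arbitrary threshold, purely for illustration
        print(f"ALERT: activity {today} is {z_score:.1f} std devs above baseline "
              f"({baseline:.1f}); suspend the user's jobs and notify security.")

Real products use far richer signals than a single count, but the principle – learn what normal looks like, then act when behaviour departs from it – is the same.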

Thirdly, with so many mainframers looking to retire in the near future, or who have already retired and are working as consultants a few days a week, there needs to be a way to keep mainframe expertise within an organization. I think there are two main strands to solving this problem. Firstly, mainframe education is needed to help people understand how mainframes work. IBM recently introduced its Mainframe Skills Council as a way to close the mainframe generational skills gap. It’s not just IBM: the Skills Council includes the Academic Mainframe Consortium, Albany State University, Broadcom, DNB Bank, HOGENT, M&T Bank, Northern Illinois University, Rocket Software, SHARE, and 21CS.

And if enough people can’t be trained to become mainframers, the second approach is to use open-source software (OSS) on the mainframe. Now I know that warning bells sound for some people whenever I mention open-source software and mainframes in the same sentence, however, there is some really good software available that can run on a mainframe, eg Zowe and Ansible.

The reason that an organization would want to use OSS on a mainframe is that there are plenty of people who are familiar with using it on non-mainframe platforms. They can bring their software expertise and use it on the mainframe without needing to know the arcane secrets of the mainframe. In addition, IBM is completely onboard with the idea of using OSS and, in many cases, has plenty of examples of how to use it on a mainframe, making life much easier for mainframe newbies with expertise in a particular piece of OSS.

The other thing about mainframes is that they are adaptable and can embrace almost any form of technology that is available on other platforms. Obviously, having Linux as a component of the mainframe helps. Things like blockchain for financial services can be used on a mainframe. And there are plenty of other technologies like that. Things that in April 1964, when the first S/360 mainframe was announced, would have seemed magical or impossible.

And, of course, the AI mentioned above can be used to fill the widening mainframe skills gap. At this stage, with AI, there will always need to be some people around because, as I’m sure we’ve all found at some time or another, there will be circumstances that you haven’t been trained to deal with. That’s where seemingly clever AIs currently find their limit. I’m sure in future that this won’t be the case, but in the next five years, it probably will.

A summary of the current state of the mainframe industry is that mainframes are powerful computing platforms that are not only keeping up with the times but are also leading in many ways. They do, however, have a bad reputation amongst some groups of people as being old-fashioned and out of date. This is something that all mainframers need to address, because mainframes aren’t going away any time soon.

Sunday 21 April 2024

Happy birthday mainframe

7 April marked the 60th anniversary of the mainframe. It was on that day in 1964 that the System/360 was announced and the modern mainframe was born. IBM’s Big Iron, as it came to be called, took a big step ahead of the rest of the BUNCH (Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell). The big leap of imagination was to have software that was architecturally compatible across the entire System/360 line.

It was called System/360 to indicate that this new system would handle every need of every user in the business and scientific worlds because it covered all 360 degrees of the compass. That was a triumph for the marketing team because it would have otherwise been called the rather dull System 500. System/360 could emulate IBM’s older 1401 machines, which encouraged customers to upgrade. Famous names among its designers are Gene Amdahl, Bob Evans, Fred Brooks, and Gerrit Blaauw. Gene Amdahl later created a plug-compatible mainframe manufacturing company – Amdahl.

IBM received plenty of orders and the first mainframe was delivered to Globe Exploration Co. in April 1965. Launching and producing the System/360 cost more than $5 billion, making it the largest privately-financed commercial project up to that time. It was a risky enterprise, but one that worked. From 1965 to 1970, IBM’s revenues went up from $3.6 billion to $7.5 billion; and the number of IBM computer systems installed anywhere tripled from 11,000 to 35,000.

The Model 145 was the first IBM computer to have its main memory made entirely of monolithic circuits. It used silicon memory chips, rather than the older magnetic core technology.

In 1970, the System/370 was introduced. The marketing said that the System/360 was for the 1960s; for the 1970s you needed a System/370. All thoughts of compass points had gone by then. IBM’s revenues went up to $75 billion and employee numbers grew from 120,000 to 269,000, and, at times, customers had a two-year wait to get their hands on a new mainframe.

1979 saw the introduction of the 4341, which was 26 times faster than the System/360 Model 30. The 1980s didn’t have a System/380. But in 1990, the System/390 Model 190 was introduced. This was 353 times faster than the System/360 Model 30.

1985 saw the introduction of the Enterprise System/3090, which had over one-million-bit memory chips and came with Thermal Conduction Modules to speed chip-to-chip communication times. Some machines had a Vector Facility, which made them faster. It replaced the ES/3080.

The 1990s weren’t a good time for mainframes. For example, in March 1991, Stewart Alsop stated: “I predict that the last mainframe will be unplugged on March 15, 1996”. Not the most successful prediction, but definitely catching the zeitgeist of the time. As mentioned above, it was the decade of the System/390. We saw the introduction of high-speed fibre optic mainframe channel architecture Enterprise System Connection (ESCON).

The System/360 gave us 24-bit addressing (32-bit architecture) and virtual storage. The System/370 gave us multi-processor support and then extended storage and 24-bit/31-bit addressing. With System/390 we got the OS/390 operating system. As we moved into the 2000s, we got zSeries (z/Architecture) and z operating systems giving us 24-, 31-, and 64-bit addressing. In 2003, the z990 was described as “the world’s most sophisticated server”. In 2005 we got the zIIP specialty engine. In 2008 it was the z10 EC with high capacity/performance (quad-core CPU chip). In 2012, the zEC12 was described as an integrated platform for cloud computing, with integrated OLTP and data warehousing. In 2000 IBM said it would support Linux on the mainframe, and, by 2009, 70 of IBM’s top 100 mainframe customers were estimated to be running Linux. A zEnterprise mainframe could run 100,000 virtual Linux servers.
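
For a sense of what those addressing widths mean in practice, here’s the simple arithmetic (the labels just follow the rough chronology above):

    # Address space implied by each addressing width (byte-addressable memory).
    for bits, label in ((24, "System/360"), (31, "XA/ESA"), (64, "z/Architecture")):
        print(f"{bits}-bit ({label}): {2 ** bits:,} bytes")

    # 24-bit ->                 16,777,216 bytes (16 MB)
    # 31-bit ->              2,147,483,648 bytes (2 GB)
    # 64-bit -> 18,446,744,073,709,551,616 bytes (16 EB)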

In terms of operating systems, OS/360 was replaced by MVT, which became OS/VS2 SVS, and then OS/VS2 MVS. That became MVS/SE, which became MVS/SP, which became MVS/XA and then MVS/ESA before becoming OS/390 and finally z/OS.

Mainframes were once islands of technology excellence. Now they are no longer islands, but are integrated in so many ways. They work happily with cloud environments to ensure the best of the cloud can be integrated with the best of mainframes. They run applications that previously were thought of as only found on distributed systems, such as Java and C++. Open-source software, like Zowe, makes it easier for non-mainframers to work successfully on mainframes. The growth of AI on the mainframe makes them even more capable. There’s no problem connecting parts of mainframe applications to parts of applications running on almost any other platform using APIs to create a new and better application program for users.
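
As a small illustration of that last point, here is a hedged sketch of a distributed application calling a mainframe transaction exposed as a REST API (for example, via z/OS Connect). The host, path, credentials, and response shown are placeholders, not a real service:

    import requests

    # Placeholder values for illustration only.
    BASE_URL = "https://mainframe.example.com:9443"
    ENDPOINT = "/accounts/12345/balance"   # hypothetical CICS/IMS transaction exposed as REST

    response = requests.get(
        BASE_URL + ENDPOINT,
        auth=("ZOSUSER", "password"),      # in practice, credentials come from a secrets store
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())                 # e.g. {"account": "12345", "balance": 1042.17}

The calling application neither knows nor cares that the balance comes from a CICS program on z/OS rather than from a Linux microservice – which is exactly the point.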

The z16 and the z16 rack-mounted variant mainframes, with their Telum chips, have on-chip AI accelerators, which deliver latency-optimized inferencing, ideal for mission-critical workloads such as credit card, healthcare, and financial transactions. The z16s are also specifically designed to help protect against near-future threats that might be used to crack today’s encryption technologies.

Mainframes may be misunderstood by people who don’t know them. They may be perceived as antiquated because they are 60 years old. But just like cars and planes over the last 60 years, mainframes have hugely improved over time. And, although they will still run applications written in the 1960s, they can do so much more now. And mainframes are constantly being updated to meet the needs of the time.

I predict that we will be celebrating the mainframe’s 70th and 80th birthdays, though we may not be able to imagine how compact it will be by then and what new capabilities it will have.

Happy (belated) birthday mainframe.