Sunday 19 December 2021

Which is better – the mainframe or AWS?


We’ve been talking about hybrid cloud and mainframes for a while now. In fact, IBM’s “Cost of a Data Breach Report” 2021 found that although the average cost of a breach increased to $4.24 million, for hybrid cloud users the average cost was lower, at $3.61 million. IBM is very keen on hybrid cloud. You remember they bought Red Hat, and their whole thrust is to get mainframe sites to use the Red Hat OpenShift hybrid cloud container platform, which allows users to develop and consume cloud services anywhere and from any cloud. They also provide IBM Cloud Pak solutions, their AI-infused software portfolio that runs on Red Hat OpenShift.

The idea of a mainframe as being an isolated silo of green screen technology has been a thing of the past for a long time now. It started with people Web-enabling CICS applications, and has gone on from there. Mainframes can quite easily take part in the API economy and very often do. They use the same RESTful connectivity that mobile developers use. In addition, mainframes are now happily running open-source software such as Zowe, and much else that makes it easier for non-mainframe-trained IT professionals to work on a mainframe as easily as any other platform.
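
To make that concrete, here’s a minimal sketch of what that RESTful connectivity looks like in practice – a Python script asking z/OSMF (IBM’s z/OS Management Facility) for a list of jobs over plain HTTPS. The host name and credentials are placeholders, so treat it as an illustration rather than a recipe:

    # A minimal sketch of the 'API economy' point: querying a mainframe's
    # job list over plain HTTPS via the z/OSMF REST jobs interface.
    # The host name and credentials are placeholders -- substitute your own.
    import requests

    ZOSMF_HOST = "https://mainframe.example.com"  # hypothetical host

    resp = requests.get(
        f"{ZOSMF_HOST}/zosmf/restjobs/jobs",
        params={"owner": "IBMUSER", "prefix": "*"},  # filter by job owner/prefix
        headers={"X-CSRF-ZOSMF-HEADER": "true"},     # CSRF header z/OSMF expects
        auth=("IBMUSER", "secret"),                  # basic auth; use real credentials
        timeout=30,
    )
    resp.raise_for_status()

    # Each entry describes one job: name, ID, status, and so on
    for job in resp.json():
        print(job["jobname"], job["jobid"], job["status"])

Notice that there’s nothing mainframe-specific on the client side – it’s the same requests library any Web or mobile developer would reach for.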

The bottom line is that mainframes have continued to re-invent themselves over the past 50-plus years in much the same way that aeroplanes and cars have. And, a mainframe is in many ways closer to a Formula 1 racing car than some down-at-heel vintage vehicle. Mainframes offer the best security of any platform with pervasive encryption and data passports. They can happily talk to SIEMs and much else running on distributed platforms. And they can communicate easily with applications running in the cloud. So, mainframers don’t only understand how mainframes work, they have a great appreciation for other platforms and understand that applications should run on the most appropriate platform available.

So, with the constant enhancements to mainframes, and with more expected in 2022 with the new Telum processors, it’s always a bit of an irritation to find non-mainframers using the word ‘modernization’ when they simply mean changing platform – or migration. The two words are not synonymous!

Anyway, Amazon used its AWS re:Invent conference at the end of November to announce a new platform called “AWS Mainframe Modernization”. This, they say, will help AWS customers get off their mainframes “as fast as they possibly can” in order to take better advantage of the cloud. Amazon suggests that it can cut the time it takes to move mainframe workloads to the cloud by as much as two-thirds, using its set of development, test, and deployment tools plus a mainframe-compatible runtime environment.

Their solution also helps customers (gosh, I was going to write victims!) assess and analyse how ready their mainframe applications are to be modernized. Amazon reckons that mainframe sites will either lift and shift their applications to the new platform or break them down into microservices. In the latter case, the mainframe workloads will be transformed into Java-based cloud services. Whatever route they choose, changing platform is a complex process requiring access to the original source code and an understanding of application dependencies before the code can even be recompiled on the new platform and tested to confirm that it still works.
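
To give a feel for what the refactoring route involves, here’s a toy sketch of the kind of small, stateless service that a COBOL batch paragraph might become. AWS’s tooling emits Java; I’ve used Python here purely for consistency with the other sketches in this post, and every name in it is invented:

    # Illustrative only: the kind of small, stateless function that a COBOL
    # batch paragraph might become after refactoring. Every name is invented,
    # and AWS's tooling would emit Java rather than Python.
    from dataclasses import dataclass

    @dataclass
    class Order:
        order_id: str
        amount: float        # order value
        loyalty_years: int   # how long the customer has been on file

    def apply_discount(order: Order) -> float:
        """A business rule lifted out of a batch program into a callable unit.

        Long-standing customers get 5% off; everyone else pays full price.
        Isolated like this, the rule can sit behind an HTTP endpoint or a
        message queue instead of inside a nightly batch run.
        """
        rate = 0.05 if order.loyalty_years >= 10 else 0.0
        return round(order.amount * (1 - rate), 2)

    print(apply_discount(Order("A1001", 250.00, 12)))  # prints 237.5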

For ‘lift-and-shifters’, the Mainframe Modernization solution offers compilers to convert code as well as testing services to make sure that all the necessary functionality is retained on the new platform. For sites choosing the microservices or refactoring route, where components can run in EC2, in containers, or in Lambda, the Mainframe Modernization solution can automatically convert the COBOL code to Java. There’s a Migration Hub, which allows customers to track their migration progress across multiple AWS Partners and solutions from a single location.

The service includes a runtime environment on EC2, with certain configurations of compute, memory, and storage. Amazon boasts that the system is agile and cost-efficient (with on-demand, pay-as-you-go resources, load balancing, and auto-scaling), offering security as well as high availability, scalability, and elasticity.

It seems that systems integrators will do much of the hard work when it comes to moving applications to the cloud. They will do this using the components provided by AWS.

Adam Selipsky, CEO of AWS, said that AWS is seeking to make the cloud cost effective “for every workload” including IBM System z workloads. He went on to say: “Mainframes are expensive. They’re complicated. And there are fewer-and-fewer people who are learning to program COBOL these days. This is why many of our customers are trying to get off of their mainframes as fast as they possibly can to gain the agility and the elasticity of the cloud.”

Of course, all this was before the big AWS outages on 7 December and 15 December.

I can see that the cloud has a lot of appeal. And there are now plenty of people with some cloud experience who prefer to work in a cloud environment. I’m also sure that there are some mainframe sites that would probably find it more economic to use a different platform. What I’m equally sure about is that most mainframe sites won’t find any advantage in moving wholesale into the cloud. Their mainframes provide so many security advantages that they are better off retaining the platform. I’m also sure that some workloads could run partly or wholly in the cloud. I’m just not a throw-the-baby-out-with-the-bathwater type of thinker. I don’t believe moving to the cloud is ‘modernization’; it’s just migration.

I also have this nagging worry about cloud security. What if, in 2021, there had been a massive breach in cloud-based applications? Who would know? What benefit is there to any organization to reveal that they have been breached, particularly if it was a breach of their applications in the cloud?

Good luck to AWS and any mainframe sites that do decide to migrate completely off the platform. But for most organizations, I still think a third way is best, which is the hybrid approach.

 

And, if you celebrate it, merry Christmas. I’ll be back in 2022!

Sunday 12 December 2021

Gartner’s strategic technology trends for 2022

Every year, Gartner predicts what the strategic technology trends will be for the year ahead and beyond. I thought that it would be interesting to see what they are predicting and whether those trends will be seen by mainframe users.

They say that automation is a critical ingredient for digital transformation. Hyperautomation provides a faster path to identifying, vetting, and automating processes across the enterprise. Organizations should focus on improving work quality, hastening the pace of business processes, and fostering nimbleness in decision making. We’ve seen the introduction of the Site Reliability Engineer (SRE), who spends half their time on developing new features, scaling, and automation, and the other half on operator-type tasks. And, of course, we have seen the growth in the use of Ansible playbooks. And there’s RPA (robotic process automation) to perform mundane tasks.
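
As a toy illustration of the sort of operator toil that gets automated away, here’s a small Python watcher that polls a (hypothetical) health endpoint and raises an alert after repeated failures – the kind of task an SRE would script rather than watch a screen for. The URL and thresholds are invented:

    # A toy example of automating away an operator task: poll a health
    # endpoint and flag consecutive failures instead of having a human
    # watch a screen. The URL and thresholds are invented for illustration.
    import time
    import requests

    HEALTH_URL = "https://app.example.com/health"  # hypothetical endpoint
    MAX_FAILURES = 3                               # alert after this many misses

    def check_once() -> bool:
        try:
            return requests.get(HEALTH_URL, timeout=5).status_code == 200
        except requests.RequestException:
            return False

    def watch() -> None:
        failures = 0
        while True:
            failures = 0 if check_once() else failures + 1
            if failures >= MAX_FAILURES:
                print("ALERT: service unhealthy -- paging the on-call engineer")
                failures = 0
            time.sleep(60)  # poll once a minute

    if __name__ == "__main__":
        watch()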

Generative Artificial Intelligence (AI) has seen an increase in interest and investment recently. Generative AI refers to algorithms that use existing content, like audio files, images, or text, to create new content. According to Gartner, in the next three and a half years, generative AI will account for 10% of all data produced, compared to less than 1% at present. It will be used to support software development more generally, to assist companies in finding candidates to fill talent shortfalls, and to identify drug candidates more readily.
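
The principle is easy to see in miniature. Here’s a deliberately tiny ‘generative’ model in Python – a Markov chain that learns word-to-word transitions from existing text and samples new text from them. Real generative AI uses vastly larger models, but the idea of new content derived from existing content is the same:

    # A deliberately tiny 'generative' model: learn word-to-word transitions
    # from existing text, then sample new text from them.
    import random
    from collections import defaultdict

    def train(text: str) -> dict:
        """Build a table of which words follow which."""
        words = text.split()
        table = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            table[current].append(nxt)
        return table

    def generate(table: dict, start: str, length: int = 12) -> str:
        word, output = start, [start]
        for _ in range(length):
            followers = table.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = "the mainframe runs the workload and the cloud runs the rest"
    model = train(corpus)
    print(generate(model, "the"))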

Data Fabric, which Gartner defines as a design concept that serves as an integrated layer (fabric) of data and connecting processes, will foster resilient and flexible integration of data across business users and platforms. This, they predict, will reduce data management efforts substantially while dramatically improving time to value.

Once an AI model is put in place, its value begins to erode as data inputs, real-world conditions, and the economic environment change. To achieve lasting value from AI investments, companies will need to take an integrated approach for operationalizing AI models, or AI engineering. Gartner estimates that companies that fully adopt AI engineering will reap three times more value from their AI efforts.
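
A bare-bones sketch of that drift problem: compare the distribution of live inputs against the training baseline and flag when they diverge. The data and threshold below are invented, and real AI engineering would use proper statistical tests and automated retraining pipelines:

    # A bare-bones sketch of input drift: compare live inputs against the
    # training baseline and flag when they diverge. Data are invented.
    import statistics

    def drift_score(baseline: list[float], live: list[float]) -> float:
        """Shift of the live mean, measured in baseline standard deviations."""
        spread = statistics.stdev(baseline)
        return abs(statistics.mean(live) - statistics.mean(baseline)) / spread

    baseline = [102.0, 98.5, 101.2, 99.8, 100.4, 97.9]  # inputs at training time
    live     = [118.3, 121.0, 117.6, 122.4, 119.9]      # inputs seen in production

    if drift_score(baseline, live) > 2.0:  # arbitrary alert threshold
        print("Input drift detected -- time to revalidate or retrain the model")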

The next stage in automation will be where a physical or software system is capable of self-managing. This is what they call autonomic systems, and they will be more apparent later in the decade. Gartner suggests that autonomic systems with in-built self-learning can dynamically optimize performance, protect companies in hostile environments, and make sure that they’re constantly dealing with new challenges. Such software will have ever-greater levels of self-management.

Decision Intelligence (DI) is automation software that enhances human intelligence. It models decisions in a repeatable way to make them more efficient and to hasten the speed to value. Gartner thinks that in the next two years, one-third of large enterprises will use DI for better and more structured decision making.
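
In miniature, a ‘repeatable decision’ can be as simple as keeping the decision rules in one place as data, so every decision is made the same way and can be audited later. The thresholds and fields below are invented:

    # A toy illustration of a 'repeatable decision': the approval rules live
    # in one place as data, so every decision is made the same way and can
    # be audited later. Thresholds and fields are invented.
    RULES = [
        # (condition, outcome) -- first match wins
        (lambda a: a["credit_score"] >= 700 and a["debt_ratio"] < 0.4, "approve"),
        (lambda a: a["credit_score"] >= 620, "refer to analyst"),
        (lambda a: True, "decline"),
    ]

    def decide(applicant: dict) -> str:
        for condition, outcome in RULES:
            if condition(applicant):
                return outcome
        return "decline"

    print(decide({"credit_score": 710, "debt_ratio": 0.35}))  # approve
    print(decide({"credit_score": 640, "debt_ratio": 0.55}))  # refer to analyst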

If you break an application down into its functional blocks, those blocks can be decoupled from the overall application. These blocks can then be used to create new applications. I guess it’s the same sort of thinking as the API economy. This is what Gartner calls composable applications. According to Gartner, companies that leverage composable applications can outpace their competition by 80% when it comes to implementing new features.
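
Here’s composability in miniature: three small blocks, and two different ‘applications’ assembled from them. The functions are invented placeholders for real business capabilities:

    # Composability in miniature: small, independent blocks that two
    # different 'applications' assemble in different ways. The functions
    # are invented placeholders for real business capabilities.
    def fetch_customer(customer_id: str) -> dict:
        return {"id": customer_id, "name": "A. Customer", "balance": 120.0}

    def apply_credit(customer: dict, amount: float) -> dict:
        return {**customer, "balance": customer["balance"] + amount}

    def format_statement(customer: dict) -> str:
        return f"{customer['name']}: balance {customer['balance']:.2f}"

    # Application 1: a statement service reuses fetch + format
    def statement_app(customer_id: str) -> str:
        return format_statement(fetch_customer(customer_id))

    # Application 2: a refunds service reuses fetch + credit + format
    def refund_app(customer_id: str, refund: float) -> str:
        return format_statement(apply_credit(fetch_customer(customer_id), refund))

    print(statement_app("C42"))
    print(refund_app("C42", 30.0))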

Gartner predicts that cloud-native platforms (CNPs), which leverage the essence of cloud technology to offer IT-related capabilities as a service to technologists, will provide the foundation for most new digital initiatives by mid-decade.

Privacy-enhancing computation (PEC) is expected to protect a company’s and its customers’ sensitive data, which, Gartner thinks, will maintain customer loyalty by decreasing privacy-related issues and cybersecurity events. Gartner goes on to suggest that roughly 60% of large enterprises will leverage these practices by 2025.
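
PEC is really a family of techniques – homomorphic encryption, secure enclaves, differential privacy, and more. As a sketch of just one of them, here’s differential privacy’s basic trick in Python: add calibrated noise so an aggregate can be shared without exposing any individual’s exact value. The parameters are purely illustrative:

    # One PEC technique among several: add calibrated Laplace noise so an
    # aggregate can be shared without exposing any individual's exact value.
    # The parameters below are purely illustrative.
    import random

    def private_sum(values: list[float], sensitivity: float, epsilon: float) -> float:
        """Return the sum plus Laplace noise scaled to sensitivity/epsilon."""
        scale = sensitivity / epsilon
        # Sample from a Laplace distribution as the difference of two exponentials
        noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
        return sum(values) + noise

    salaries = [48_000, 52_500, 61_000, 45_250]  # the raw, sensitive data
    # sensitivity: the most one person can change the sum; epsilon: privacy budget
    print(private_sum(salaries, sensitivity=100_000, epsilon=1.0))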

Cybersecurity mesh architecture (CSMA) is an integrated approach to securing IT assets regardless of their location. It redefines the perimeter of cybersecurity around the identity of a person or a thing. Gartner predicts that this will reduce the financial impact of cyber incidents by 90% in less than two years.

The distributed enterprise model allows employees to be geographically dispersed, making it possible to employ talented staff based anywhere. Gartner thinks that organizations using this model will achieve 25% faster revenue growth than companies that don’t. Following the pandemic and lockdowns, many companies must already be part of the way towards this model.

Total experience (TX) shows the value of improving every stakeholder’s experience – that’s customers, employees, and users. This, Gartner suggests, will improve business outcomes. They also warn that existing silos need to be broken down.

It will be interesting to see how many of those become part of the new normal.

Sunday 5 December 2021

10 ways hackers pressure your company to pay the ransom

With nation states and criminal gangs using ransomware to attack companies, it’s no surprise that these bad actors have upped their game when it comes to persuading organizations to pay the ransom they are demanding. A new report from Sophos looks at 10 different techniques that these bad actors can use to persuade their victims to pay up. You can read all about it here.

Let’s take a look at some of the techniques listed in the report – techniques that go beyond simply encrypting an organization’s data and corrupting their backups, and things that any organization needs to be aware of.

Perhaps not that new, the first trick in the book is to make a copy of a company’s data available on the dark web (or anywhere else, come to that) unless the company pays the ransom. They may even auction the data if they think it is that valuable. This pressure can make it difficult for sites not to pay up, even if they have a backup copy they could use, because paying avoids the embarrassment, the loss of customers and reputation, and even the legal repercussions that could follow if the data were made public. Of course, you are dealing with criminals, so there’s no guarantee they will keep their end of the bargain.

Similarly, these bad actors may contact employees and senior management, letting them know that their personal data has not only been stolen, but also may be auctioned online unless their ransom demands are met. These employees will pressure the organization to pay the ransom.

If that doesn’t put enough pressure on an organization to pay the ransom, the next strategy is for the hackers to contact business partners, customers, the media, and other people. Basically, these people receive an email or text using the contact details that come from the hack. They’re informed that unless the hacked company pays the ransom, their personal details will be up for sale. This, the hackers hope, will encourage the company to pay the ransom.

Not surprisingly, and in a similar manner to when people are held for ransom, the bad actors will warn the hacked organization not to contact law enforcement. The hackers fear that once the police are involved, they may help the company resolve the hack without paying the ransom. It will also draw the attention of the police to the bad actors and their work.

The threat from insiders has become better recognized in the past couple of years. Criminal gangs may well persuade employees with a drug or gambling habit to help them infiltrate the organization in exchange for clearing the money those employees owe the gang. Similarly, the hackers may use disgruntled employees to break into a network.

Another technique, once the hackers are inside an organization, is to create a new domain admin account. Once that’s been achieved, the passwords for the other admin accounts are reset. As a consequence, the real IT administrators can’t log in to the network to fix the system. Their only option is to set up a new domain and then try to restore from backups (if available).
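
Defenders can watch for exactly this pattern. Windows logs security event 4720 when an account is created and 4724 when a password reset is attempted, so a rough detection sketch might scan an exported log for a new account followed by a burst of resets. The CSV layout here is an assumption:

    # Rough detection sketch for the pattern above: a newly created account
    # (Windows security event 4720) followed by a burst of password resets
    # (event 4724). Assumes events exported to CSV with columns
    # timestamp,event_id,account.
    import csv

    ACCOUNT_CREATED, PASSWORD_RESET = "4720", "4724"

    def scan(path: str, reset_threshold: int = 3) -> None:
        created, resets = [], 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["event_id"] == ACCOUNT_CREATED:
                    created.append(row["account"])
                elif row["event_id"] == PASSWORD_RESET:
                    resets += 1
        if created and resets >= reset_threshold:
            print(f"Suspicious: new account(s) {created} plus {resets} password resets")

    if __name__ == "__main__":
        scan("security_events.csv")  # an exported security event log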

Some hackers have used phishing attacks to get control of employees’ email, and then email IT, legal, and security teams to warn of further attacks in the future if the ransom isn’t paid.

Hackers may delete backups or even uninstall backup software. According to Sophos, at one company, a compromised admin account was used to contact the host of the victim's online backups and instruct them to delete the offsite backups.

Hackers have also printed physical copies of their ransom note on all connected devices, including point of sale terminals. Apart from the nuisance value, and the waste of paper involved, this can be upsetting for office staff.

Lastly, the bad actors may launch distributed denial of service (DDoS) attacks if ransom negotiations have stalled. This, the hackers hope, will convince their victim to restart negotiations. DDoS attacks can also be used as a way to keep IT security resources busy while the actual ransomware attack is taking place.

Sophos also gave some thought to what can be done to defend against ransomware attacks. What they suggest is:

  • Implement an employee awareness program that includes examples of the kind of emails and calls attackers use and the demands they might make.
  • Establish a 24/7 contact point for employees, so they can report any approaches claiming to be from attackers and receive any support they need.
  • Introduce measures to identify potential malicious insider activity, such as employees trying to access unauthorized accounts or content.

Their other suggestions for security against various cyberthreats, including ransomware, include:

  • Monitor network security 24/7 and watch out for early signs of an attack.
  • Shut down Internet-facing remote desktop protocol (RDP) to prevent hackers accessing the network. If users need access to RDP, put it behind a VPN or zero-trust network access connection and use Multi-Factor Authentication (MFA).
  • Use robust security policies.
  • Keep regular backups (at least one copy offline) and practice restores – there’s a simple verification sketch after this list.
  • Prevent attackers from getting access to and disabling security: choose a solution with a cloud-hosted management console with MFA enabled and Role Based Administration to limit access rights.
  • Use a layered, defence-in-depth security model.
  • Have an effective incident response plan in place and update it as needed. For additional help, turn to external experts to monitor threats or to respond to emergency incidents.
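
On the backup point, ‘practice restores’ can start as simply as verifying that a backup copy still matches the original before you ever need it. Here’s a minimal Python sketch – the paths are placeholders, and a real scheme would also rehearse restoring the data:

    # A minimal sketch of 'practice restores': record a checksum when the
    # backup is taken, then verify the copy before you ever need it. Paths
    # are placeholders; a real scheme would also test restoring the data.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB at a time
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(original: Path, backup: Path) -> bool:
        """True if the backup copy still matches the original's checksum."""
        return sha256_of(original) == sha256_of(backup)

    if __name__ == "__main__":
        ok = verify_backup(Path("payroll.db"), Path("/mnt/offsite/payroll.db"))
        print("backup verified" if ok else "BACKUP MISMATCH -- investigate now")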

These excellent suggestions are clearly aimed at distributed systems, but there are still some things that mainframers can learn from them.