Sunday, 7 March 2021

Tell me about zERT

 

We’ve been talking about securing data at rest and data in transit for a long time. But data in transit matters more than ever these days: more and more information is being transferred, the mainframe is an important network hub, and ensuring that traffic is appropriately secured becomes ever more important.

With the introduction of the z14, we got the concept of pervasive encryption and the idea that all data could be encrypted no matter where it was. For data in transit, we’re probably familiar with TLS/SSL, SSH and IPSec cryptographic network security protocols, but how do you know their cryptographic status? That’s the question that z/OS® Encryption Readiness Technology (zERT) answers. And this blog is a very brief summary of what zERT can be used to do.

zERT’s raison d’être is to provide its users with intelligent network security discovery and reporting capabilities. And it does this by monitoring TCP and Enterprise Extender connections, and collecting and reporting on the cryptographic security attributes of IPv4 and IPv6 application traffic.

The data it collects is written to SMF in new SMF 119 subtype 11 and 12 records for analysis. There’s also a new real-time Network Management Interface (NMI) service for network management applications to retrieve zERT SMF records as they are generated.

SMF 119 subtype 11 records contain the full detail of each session. Subtype 12 records capture all unique session types between client/server pairs per interval. Both allow users to see whether traffic is protected and, if so, which security protocol and version are used.
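
As a rough sketch only (the exact statement names and defaults should be checked against the z/OS Communications Server IP Configuration Reference for your release), zERT discovery and the subtype 11 detail records are typically switched on with TCP/IP profile statements along these lines:

; Enable zERT discovery in the TCP/IP stack
GLOBALCONFIG ZERT
; Write SMF 119 subtype 11 (zERT connection detail) records
SMFCONFIG TYPE119 ZERTDETAIL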

For each client/server pair, zERT can be used to track connections between the client and server IP addresses, and the information collected includes the port numbers, job name, and userid.

Looking in more detail, we can see that the zERT summary records contain connection and throughput counters, including: the total number of connections; the number of partially protected connections (where encryption was not applied for the entire session); and the number of short (shorter than 10 seconds) connections. It’s worth noting that short connections can be significant for TLS because establishing the session is expensive in terms of CPU, making them a costly way to run connections.

There are a couple of limitations to zERT. For example, no information is collected for non-EE UDP traffic or for traffic using other IP protocols. If you want to see a list of what these other protocols are, have a look at https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers. And zERT only collects cryptographic security attributes for the TLS, SSL, SSH, and IPSec protocols, not for any other cryptographic security protocols.

When zERT is used with recognized cryptographic protection, it can show which cryptographic protocol is being used, who the traffic belongs to, which cryptographic algorithms are used, the length of the cryptographic keys, and other important attributes of the cryptographic protection. This can be used to determine regulatory compliance and, importantly, to see whether any connections are currently using cryptographic protection that is not robust enough and needs to be strengthened. It can also provide information for auditors and compliance officers.

Of course, zERT does not collect or record the values of keys, initialization vectors, or any other secret values that are exchanged or negotiated during the session.

In terms of the performance impact of zERT, there are a few things to consider. It’s estimated that the performance impact on the TCP/IP stack is quite small, in terms of latency and CPU consumption. On the other hand, the zERT Network Analyzer can affect system CPU consumption because it is a data-intensive application. However, zERT Network Analyzer is a Java application, and it uses Db2 for z/OS as its data store, so, a lot of the processing is zIIP eligible. There’s also zERT aggregation, which can be used to reduce the volume of zERT-generated SMF data in situations where there are workloads with lots of short-lived connections.
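
Again as a hedged sketch rather than a definitive configuration (check your release’s documentation for the exact syntax), zERT aggregation and the subtype 12 summary records are enabled with TCP/IP profile statements along these lines:

; Enable zERT aggregation (summarizes by security session per SMF interval)
GLOBALCONFIG ZERT AGGREGATION
; Write SMF 119 subtype 12 (zERT summary) records
SMFCONFIG TYPE119 ZERTSUMMARY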

zERT looks like a really useful tool from IBM. zERT Discovery collects and records cryptographic information, and zERT Aggregation groups attributes by security session. As a tool, it provides a way for users to get a grip on the overall quality of the cryptographic protection for their z/OS network. The security team can find out whether they have any security exposures. They can see whether any unapproved protection protocols are being used, or even whether there are some cases where no protection is being used on data in transit.

Find out more about iTech-Ed Ltd here.

Sunday, 28 February 2021

Mainframe, Cloud, and the Post-Pandemic IT Landscape


With so much value locked up in data that is often held on tape, Model9 wrote in this year’s Arcati Mainframe Yearbook about how to leverage and monetize that data using the cloud.

The year ahead, one more step into what promises to be a tumultuous decade for mainframes, is full of opportunities to do groundwork and lay foundations. Our customers tell us two things, loud and clear: They know they need to do more to leverage their vast store of mainframe data and they also continue to feel squeezed by the costs and limitations of traditional storage options.

Put another way, they know the mainframe is a solid, reliable core for their business, but they recognize competing priorities and opportunities, most of which involve leveraging the cloud for agility, faster and better analytics, and potentially more cost-effective storage.

All of these trends were becoming visible in 2019, but the double hammer blow of Covid disruptions and lockdowns, together with the secondary economic impacts, is powering them to prominence.

While some organizations have found ways to actually eliminate the mainframe, we believe there is still a strong case for most enterprises to have access to a dedicated processing powerhouse to support mission-critical activities and align fully with corporate priorities. But when it comes to the data stewardship occurring within the mainframe environment, there are two closely related problems.

First and foremost, business units, divisions, and even individual contributors are “voting with their feet” – putting more and more data and more and more analytics in the cloud. It’s easy, simple, and can often produce immediate and valuable business results. But this independence limits the potential for cross-fertilization between the new and the old, the central and the peripheral, and between what’s core and what’s potentially just short-term. It is past time to relink those worlds – enhancing data quality for both and making mainframe even more relevant.

Second, the historic dependence on tape for long-term and archive storage, as well as for backup and recovery, results in heavy, ongoing investments in hardware, software, and the people to manage them. It also means that those petabytes of data (not to mention active data that is similarly walled off by its proprietary forms and formats) are difficult to reach with the new tools that live in the cloud.

A recent report from leading analyst firm, Gartner, predicts, “by 2025, 35% of data center mainframe storage capacity for backup and archive will be deployed on the cloud to reduce costs and improve agility, which is an increase from less than 5% in 2020.” That’s momentous!

Cloud is no longer an experiment or a theory. It is a substantial part of the world’s information infrastructure. Even more than the changes wrought in the mainframe world by personal computers and client-server computing a generation ago, cloud promises tectonic change – but change that can benefit mainframe operations. Indeed, unlike mid-range computers and PCs in the past, the cloud shouldn’t be perceived as a threat for the mainframe but rather as a platform for integration/collaboration.

Mainframe organizations should consider where they stand with regard to existing technology, what their business needs and business opportunities are, and the available paths forward. The decision making is a step on what must become a journey.

Organizations face a continuing challenge in the stranglehold of tape and proprietary data formats, which are built around sequential read-write cycles that date back to 9-track tape. With this technology, both routine data management tasks as well as more ambitious data movements put an enormous processing drain on the mainframe itself and lead to increased MIPS costs.

The solution is to eliminate the direct costs of acquisition, maintenance, and management and instead deal directly with the data. That means mastering data movement, so that data can be used where it is needed and stored where it is most cost effective. This step can provide immediate dividends, even if you plan to go no farther.

By using modern storage technologies for mainframe that provide flexibility, you are no longer confined to a narrow list of choices. The technology can also provide the connectivity that enables movement between the mainframe and the new storage platform. With this enabler, you can be master of your own data.

The next challenge is achieving real integration of mainframe data with other sources of data in the cloud.

The solution is to take advantage of Extract-Load-Transform (ELT) technology instead of complex, old, slow, compute-intensive ETL approaches. ELT can leverage processing capabilities within the mainframe but outside the CPU (eg zIIP engines), along with TCP/IP, to rapidly extract mainframe data and load it into a target. There, transformation to any desired format can occur economically and flexibly. The net result is more cost effective and generally faster than ETL.

Building on your movement and transformation capability can help you better engage with cloud applications when and where it makes sense. It is an ideal way to move secondary data storage, archive, and even backup data to the cloud, and then transform it to universal formats.

Liberated from the complexities of the mainframe, this data can make the difference between seeing the full business picture and getting full business insight, or missing out entirely. This data can also provide a powerful addition to a data lake or can be exposed to the latest agile analytical tools – potentially delivering real competitive advantage to the enterprise, at modest cost. And all this without adversely impacting traditional mainframe operations, since ELT can move data in either direction as needed. Transformation can apply to any type of mainframe data, including VSAM, sequential, and partitioned data sets. That data can be converted to standard formats such as JSON and CSV.
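
As a purely illustrative sketch of that last step (this is not Model9’s tooling, and the record layout, EBCDIC code page, and file names are all invented for the example), once a data set has been extracted and loaded into cloud storage as raw bytes, a small job running in the cloud can decode it into those universal formats:

import csv
import json

# Hypothetical fixed-width layout for the extracted records (all EBCDIC text):
# bytes 0-9 customer id, 10-39 name, 40-49 balance (character numeric).
RECORD_LENGTH = 50
FIELDS = [("customer_id", 0, 10), ("name", 10, 40), ("balance", 40, 50)]

def decode_record(raw):
    # cp037 is a common US EBCDIC code page; yours may differ
    text = raw.decode("cp037")
    row = {name: text[start:end].strip() for name, start, end in FIELDS}
    row["balance"] = float(row["balance"] or 0)
    return row

def transform(input_path, json_path, csv_path):
    # Read raw fixed-width records and write JSON Lines and CSV copies
    with open(input_path, "rb") as src, \
         open(json_path, "w") as jout, \
         open(csv_path, "w", newline="") as cout:
        writer = csv.DictWriter(cout, fieldnames=[f[0] for f in FIELDS])
        writer.writeheader()
        while True:
            raw = src.read(RECORD_LENGTH)
            if len(raw) < RECORD_LENGTH:
                break
            row = decode_record(raw)
            jout.write(json.dumps(row) + "\n")
            writer.writerow(row)

transform("extracted_dataset.bin", "customers.jsonl", "customers.csv")

Real mainframe data adds wrinkles (packed decimal fields, variable-length records, code page choices), but the shape of the work is the same: decode once in the cloud, then let any downstream tool read the JSON or CSV.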

Once mainframe data is no longer locked in a silo it can be leveraged and monetized for new business purposes, right in the cloud and in ways not possible in a traditional tape or VTL environment.

The final challenge for some organizations, for example those that evolved to a highly decentralized operational model, is that a central, on-premises mainframe may no longer deliver benefits and may, in fact, add to latency, cost, and inflexibility.

The solution for these organizations is to recognize that data is the only non-negotiable element. If the data is already in the cloud, or if you can get it there, you can grow more and more native capability in the cloud, ranging from operational applications to analytics. Replicating or matching traditional mainframe functions with cloud-based functionality is challenging but achievable. As the most substantial step in a cloud journey, it is necessarily the most complex, especially when organizations decide to actually rewrite their existing applications for the cloud. But ELT and the liberation of data from mainframe silos can lay the groundwork and provide a solid basis for finding a workable path to move beyond the mainframe. In particular, moving historical data from voluminous and expensive tape infrastructure to the cloud permits consideration of a post-mainframe future, if so desired.

Complete migration to the cloud is not for all organizations. But for some organizations it offers an opportunity to transform and grow in new ways.

By enabling the mainframe to play more effectively in the data business value game, cloud connectivity and cloud capabilities can make the mainframe an even more valuable and sustainable platform for the organization. Indeed, these options have the potential to make the mainframe even more integral to an organization’s future.

You can read the full article from Model9 here.

Sunday, 21 February 2021

Relationship Advice For You and Your Mainframe


Deborah Carbo, Director, Product Management & Strategy, Broadcom Mainframe Software, suggested in this year’s Arcati Mainframe Yearbook that the modern world of enterprise IT is a lot like relationships. The more you put into them, the more you get out of them. And in this relationship, the mainframe is a perfect match!

She confirmed that investing in your mainframe delivers strong ROI and integrating it into your hybrid infrastructure can advance your transformation goals, provide competitive advantage, and deliver on your SLAs. Investing in your mainframe is also an excellent way to ensure you’re prepared for growth. Sure… I’ve heard the detractors. I’ve also seen risky relationships where some fall head over heels for a new tech trend only to find the honeymoon phase fizzle quickly. You know what they say, the grass isn’t always greener...

As with any relationship, you need to (be willing to) give to get. You get an epic increase in value from your mainframe by modernizing it in place, opening it up, and bringing it closer to your front-end digital apps by integrating it with your hybrid cloud. To strengthen your relationship and gain even greater value from your mainframe, here’s the idea:

First – celebrate what you have and amplify the love. You have a tremendous amount of code invested in the mainframe, and you can depend on it. Your mainframe has been with you through thick and thin. While Cloud is an attractive focus that offers tons of new potential, it isn’t the answer to every question. Recognize that your mainframe offers extraordinary value. It delivers transactional scale, data and user protection, and always-on availability – and, yes, it also offers tremendous new potential. Cloud? It’s well suited for horizontal scaling, and for web and front-end app serving. Starting fresh may sound appealing, but it ultimately requires replacing enormous amounts of good, highly-optimized code with new code that is more generic, unproven, and potentially vulnerable. You’ll never recover that ROI. I’ve heard from customers time and again that modernizing in place is the best way forward. It’s easier, less risky – and it burns fewer of your precious resources. Continuing to invest in enduring solutions like the mainframe and working with it enables you to get a substantial return on your relationship investment.

Second – open up to new experiences. Interaction with others is what makes life rich and exciting. Just as healthy relationships don’t exist in a vacuum, thriving IT platforms don’t operate in a silo. There is no need to limit yourself to Cloud OR mainframe. You should see your future as Cloud AND mainframe. When you get the mainframe out of the back office and connect it to your front-end world of apps and mobile devices you unleash all-new strategic value. Now, all your developers can build richer, more powerful digital applications that easily integrate with mainframe processes and data using the cloud-native tools they already know and love. You dramatically expand your talent pool. Suddenly you’re seeing each other in a whole new light.

Today, new open tools and technologies make it easy for the mainframe to open up and interact with others while also increasing security. Leveraging open APIs makes it easy to connect, automate, and combine operational and other data to integrate and manage across the hybrid environment. Your infrastructure is more predictive, efficient, and protected, and your DevSecOps teams can work with the tools they know and love.

With Zowe, the first open-source project for z/OS, any developer, sys admin, or sys prog can work with the platform just as they would with any other platform or cloud. By using the Zowe API Mediation Layer and a growing list of third-party tools like ServiceNow, you can speed mainframe development and operations and automate tasks like systems maintenance and software installs.
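
To give a flavour of what that looks like in practice (an illustrative sketch only, with invented data set and job names), someone with the Zowe CLI installed and a profile pointing at their z/OS system can drive everyday tasks from the same terminal and scripts they use elsewhere:

zowe zos-files list data-set "TEAM.*"
zowe zos-jobs submit data-set "TEAM.JCL(BUILD)"
zowe zos-jobs list jobs --owner TEAMUSR

Commands like these slot straight into the shell scripts and CI/CD pipelines that distributed teams already use.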

Imagine a young developer, Jeannie, hired straight out of university to work on the mainframe. She’s excited to discover that an open mainframe, with integrations between Endevor and VS Code front ends and integrated CI/CD pipelines, can take advantage of modern tools such as Git and Jenkins. Or consider David, a systems programmer who, working with SYSVIEW and ServiceNow, can automate service tickets, freeing him up to work on more strategic projects. In both cases, these mainframers are now using a common toolset that allows them to interact more with front-end teams, work more collaboratively with them, and share in their culture.

Finally – look to the future and grow together. Especially in a world of cloud and continuous innovation, your mainframe has an important and indispensable role to play. It’s not only the most reliable partner in your IT relationship, it also allows you to deliver powerful new value and serve customers in innovative ways. Realize that you make a great team and make the commitment to evolve together to achieve that next phase of growth.

The mainframe is well known as a workhorse for large transactional and batch workloads. These workloads are all about high throughput and derive value from the stability, security, and processing speed of the platform, so they belong close to the data. You get a single source of truth, along with better performance, trust, and integrity.

But it’s a hybrid world and nothing lives in isolation. When these workloads work together with other cloud-native workloads, like your mobile, web, and digital front ends, you not only ensure you’re delivering, you also make the most efficient use of your resources.

Exploiting containers on the mainframe now takes the friction out of the hybrid environment and allows you to easily bring applications closer to the data they need for optimal performance. How? Containers let you better componentize apps and expose them. This allows you to more easily consume and deploy parts of those apps in the environment, bringing them closer to the data. From there you can iterate more quickly – including moving components that would traditionally run in the cloud to the mainframe. These apps and processes can then run on the mainframe, shortening the distance between the front ends and the legacy code: communicating at memory speed, more securely, and with better performance, so better ROI again.

When it comes to containers on the mainframe, their capabilities and exploitation are still maturing. But there is significant excitement around them. They serve as another example of how it’s becoming easier and easier to work with the mainframe and bring the mainframe’s qualities closer to your front-end world.

You can read the full article here, including how you can make a date with Broadcom.