Sunday, 11 April 2021

I’m in a meeting!!

It was bad enough when we were all in the office (or the machine room): there were too many meetings to go to. But now that we have Zoom, Teams, and any number of other ways of meeting, the amount of time people spend in meetings just seems to keep increasing. Last week, for the first time, I found myself in two important meetings at the same time – one using Zoom and one using Teams. This is total madness!

What types of meeting are you spending so much time in? There are lots of ways of classifying meetings. Let’s divide them into six types:

  • Status update meetings – these are the most common, and happen frequently. They are used for project updates, team alignment, and general catch-ups.
  • Information sharing meetings – these may involve presentations in which information is passed to a team, with staff able to ask questions. They may also take the form of a training session.
  • Decision making meetings – these are where goals are set and solutions to problems are worked out and evaluated. Information is shared, strategies are discussed, and actions are decided on.
  • Problem solving meetings – these need to be solution focused and deal with internal or external challenges.
  • Innovation meetings – these allow new ideas to be suggested and the meetings help drive innovation. They may involve brainstorming sessions.
  • Team building meetings – in pre-Covid days, these may have involved away days and team building exercises.

Working from home or working from anywhere was meant to make people more productive because they didn’t need to commute, and they were less likely to be disturbed by work colleagues stopping by their desk for a chat. However, statistics show that in 2020 the average number of meetings attended per worker rose by 13.5 percent. Frighteningly, 11 million meetings are held each day, which works out at 55 million meetings per week or 220 million meetings per month! Currently, 15 percent of an organization’s time is spent in meetings, and that figure has increased every year since 2008. Apparently, employees spend 4 hours per week preparing for status update meetings. And the consequence is that 67 percent of employees complain that spending too much time in meetings hinders their productivity at work.

It gets worse: most employees attend 62 meetings per month and feel that half of them are a complete waste of time. And 92 percent of employees say they multitask during meetings – which may help them be more productive, but may also contribute to the failure of the meeting.

Managers and professionals lose 30 percent of their time in meetings that they could have invested in other productive tasks. Ineffective meetings cost professionals 31 hours every month, or 4 working days. And 95 percent of meeting attendees say they lose focus and miss parts of the meeting, while 39 percent confess to dozing off during meetings!

A survey of 6,500 people in the USA, UK, and Germany, covering 19 million meetings, found that ineffective meetings cost up to $399 billion in the USA and $58 billion in the UK.

These statistics come from Atlassian, Attentiv, Cleverism, Condeco, Doodle, Harvard Business Review, HR Digest, Korn Ferry, the National Bureau of Economic Research, ReadyTalk, The Muse, and Timely.

There I was, monitoring a Zoom meeting and a Teams meeting, and the question that came to mind was: could I have done the same with two Teams meetings or two Zoom meetings? For Teams, the answer is to join one meeting using the Teams desktop application and the second using the Microsoft Teams web application.

With Zoom you can also join multiple meetings at the same time using the Zoom desktop client, although you can’t host multiple meetings. You do need a Business, Enterprise, or Education Zoom account, and you have to contact Zoom Support to have this feature enabled, which could take a few days. Once the setting is enabled, you can join additional meetings by using the join URL or by going to https://zoom.us/join and typing in the meeting ID; the Join button in the Zoom client only works for the first meeting you join.

If you really want to do this, here are the instructions…

  1. Sign in to the Zoom web portal.
  2. In the navigation panel, click Settings.
  3. Click the Meeting tab.
  4. Under the In Meeting (Basic) section, verify that Join different meetings simultaneously on desktop is enabled. 
  5. If the setting is disabled, click the Status toggle to enable it. If a verification dialog displays, choose Turn On to verify the change.

On the day that you want to join multiple meetings, you can join the first meeting by:

  • Clicking the Join button in the Zoom desktop client;
  • Clicking the join URL; or
  • Navigating to https://zoom.us/join and entering the meeting ID.

For meetings two and three (or more), you have to use the join URL in your browser or manually enter the meeting/webinar ID on https://zoom.us/join, and the Zoom client will automatically launch the additional meeting.
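If you find yourself doing this regularly, a small script can save a little typing. The sketch below is purely my own illustration (not anything provided by Zoom): it opens the join URL for each meeting in a list in your default browser, and the Zoom client then launches each meeting in turn. The meeting IDs are placeholders, and I’m assuming the common https://zoom.us/j/<meeting ID> join-URL form – if your invitation gives a full join URL (for example, one with a password token), use that instead.

# multi_meet.py – a minimal sketch (not an official Zoom tool) that opens the
# join URL for each meeting in your default browser, so the Zoom desktop
# client can launch the additional meetings. The meeting IDs are placeholders.
import webbrowser

MEETING_IDS = ["1112223333", "4445556666"]  # replace with your real meeting IDs

for meeting_id in MEETING_IDS:
    # Join URLs commonly take the form https://zoom.us/j/<meeting ID>; if your
    # invitation supplies a full URL (including a password token), use that.
    url = f"https://zoom.us/j/{meeting_id}"
    print(f"Opening {url}")
    webbrowser.open(url)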

And there you are, unproductive in two or three meetings at the same time!

One reason that so many meetings go on for so long is that everyone is comfortable. They have a tea or a coffee. They may have some biscuits or a doughnut to nibble on. And they are sitting in a comfortable chair. There’s no need for them to rush. And that’s why meetings held with people standing up can be so much quicker and can focus people’s attention. Scrums, as people using the agile framework call them. Although they were originally used for developing software, they are now used by many organizations. A small group of people stand in a room – or on a Zoom call – for a limited period of time, often 10 or 15 minutes. What’s been achieved can be reviewed, and what needs to be done can be focused on. And these brief meetings are held frequently, often at the start of the day. And this seems to work well.

I’m inclined not to call a meeting if there isn’t a real purpose for having one – you know, the kind of meeting that happens simply because it’s always scheduled for the second Tuesday of the month. I think it’s important for the chair to keep the meeting focused. The worst kind of meeting is the one where the chair has to talk at length about everything! And I like the idea of standing up at meetings to encourage everyone to be brief, concise, and focused. And I really don’t want to be in two (or more) meetings at the same time again – even if I know how to do it!

 

Sunday, 28 March 2021

How secure is working from anywhere?

 
As the pandemic passes the year mark, and people have been working from home or wherever they can, the big question is: how are organizations dealing with the many new security issues brought about by supporting a remote workforce? What are the priorities for protecting the network and data? What are the best strategies for protecting this expanded attack surface and coping with the loss of the traditional network perimeter? To find out, Nucleus Cyber/archTIS commissioned Cybersecurity Insiders to conduct a survey of security professionals. The report, entitled “The 2021 State of Remote Work Security”, tells us what they found.

Perhaps not surprisingly, the majority of those surveyed (86%) said they intended to continue supporting their remote workforce even after the pandemic is officially declared over. However, despite this large proportion, three-quarters of respondents noted that they still had serious concerns regarding the security risks of their remote workforce.

In addition, they found that the applications organizations are most concerned with securing include file sharing (68%), the web (47%), video conferencing (45%), and messaging (35%). Seven in ten organizations (70%) see remote work environments having an impact on their compliance posture, and GDPR tops the list of compliance mandates (51%). Organizations prioritize human-centric visibility into remote employee activity (34%), followed by next-generation antivirus and endpoint detection and response (23%), improved network analysis and next-gen firewalls (22%), and Zero Trust Network Access (19%).

Let’s have a look at their findings in more detail.

Network access (69%) tops the list of security concerns when it comes to securing remote employees. Bring Your Own Device (BYOD) and personal devices (60%), applications (56%), and managed devices (51%) are also a concern for a majority of organizations.

The applications that organizations are most concerned with securing include file sharing (68%), the web (47%), video conferencing (45%), and messaging (35%). This is not surprising because these are fundamental business applications that all organizations rely on for a productive workforce.

Security breaches at the endpoints are a source of concern for many organizations as they look to secure their corporate assets. Therefore, it is no surprise that organizations are most concerned with exposure to malware or phishing risks (39%) followed by protection of data, especially when accessed by unmanaged endpoints (36%).

The biggest security concerns due to the shift in the numbers of remote workers include data leaking through endpoints (68%), users connecting with unmanaged devices (59%), and access from outside the perimeter (56%). This is followed by maintaining compliance with regulatory requirements (45%), remote access to core business apps (42%), and loss of visibility of user activity (42%).

Key security challenges cited include user awareness and training (57%), home/public WiFi network security (52%), and sensitive data leaving the perimeter (46%).

The main reasons remote work is less secure are: users start to mix personal and corporate use on their work laptops, increasing the risk of drive-by downloads (61%); users are more susceptible to phishing attacks at home (50%); the organization no longer has visibility because most remote workers operate outside the corporate network (38%); and users who are furloughed pose an increased risk of data theft (25%).

Just over two-thirds of organizations (70%) see remote work environments having an impact on their compliance posture. GDPR tops the list of compliance mandates (51%).

When organizations were asked about security controls, most said they use a variety of them to protect remote work scenarios. A majority of respondents (80%) use antivirus/anti-malware. Other controls in use include firewalls (72%), virtual private networks (70%), multi-factor authentication (61%), endpoint detection and response (56%), and anti-phishing (54%), among others.

Respondents were also asked to rank the importance of different cyber technologies in protecting their organization from these threat vectors. The survey found that organizations prioritize human-centric visibility into remote employee activity (34%), followed by next-generation anti-virus and endpoint detection and response (23%), improved network analysis and next-gen firewalls (22%), and zero trust network access (19%).

This report is based on the results of a comprehensive online survey of 287 IT and cybersecurity professionals in the US, conducted in January 2021, to identify the latest enterprise adoption trends, challenges, gaps, and solution preferences for remote work security. The respondents range from technical executives to IT security practitioners, representing a balanced cross-section of organizations of varying sizes across multiple industries.

It’s a really interesting report. I did find myself wondering what else organizations should be worrying about with their employees working from home. There’s always the problem of employees using risky apps that they might have downloaded, and software that they may have unwittingly downloaded after visiting high-risk websites. There’s also the issue of cloud-based attacks, with malware being delivered over cloud applications such as OneDrive for Business, SharePoint, and Google Drive.

Then there are issues with patches to applications and even to the firmware in remote (edge) computers. Centralized IT will be informed when there’s a security update to a piece of software; it then has to find a way to get that update to all the edge computers in its PC fleet – even when those laptops may not be switched on. In addition, these days, malware attacks can target the firmware in a computer, so that firmware needs to be patchable remotely too.

Having said that, it’s still interesting to see what people are concerned with. You can download a copy from here.

Sunday, 21 March 2021

That new mainframe job

  

So, either it’s time for you to look for a new job, or you’re looking for new mainframe staff where you work. The question is this: what are the most important characteristics of the new job that you should be looking for, or that you should be offering?

The obvious answer, I guess, is salary. How much does the job pay? That’s usually the biggest criterion. If it doesn’t pay enough – or at least more than you’re getting now – then there’s no point applying for it. However, research has found that a higher salary doesn’t make you happier – or not very much happier. The truth is that at the start of your career money is important because you need to pay for things. But once you earn enough to pay for the rent/mortgage, food, clothes, holidays, and a bit to spare, then an increase brings less happiness than you might expect. And by the time you need a pay rise to pay for your second yacht, the increase in pay brings almost no increase in happiness at all.

The second big thing that people like is a good job title. They like to be a senior something, or a principal something. Vice-president of something is also pretty good. But really, in most companies, the person doing most of the work is the person who doesn’t really have a job title. They are the people keeping the company going – without whom the business wouldn’t be as successful as it is. In my experience, people with very long job titles tended to be the most ineffectual at their jobs. Following the Peter Principle, they had been promoted above their level of competence and were then transferred sideways into a role where they could do little harm. If people are impressed by job titles, they are probably not the sort of people you want working on your mainframe!

So, what should you be looking for in your new job, or what should be near the top of your advert? One answer is work-life balance. Being able to get home to see your child’s performance in the school play, or to watch them playing for the school football team, is very important to them – and to you. So, it’s important to be able to schedule your working day around those events. Find a job where you can take a couple of hours off in the afternoon and make up the time in the evening. You’ll be amazed at how much happier you and your family will be.

Training and CPD (continuing professional development) are important. How often at conferences did you hear people say that they could only attend for one day because there was no money in the budget for an overnight stay, or to pay for them to attend all three days? It’s true that there are lots of brilliant online training courses available, but there’s something extra you get from attending a training course in person – and I don’t mean the beer that gets drunk in the bar in the evening! Staying up to date with the latest mainframe technologies is a very important part of job satisfaction. And finding out what other mainframe sites are doing, or planning to do, is important too.

Health is important to a person’s happiness. You need to ensure that your new job values your (and your family’s) health – so that if you do need some kind of treatment, taking the time for it is accepted as a normal part of working life.

And that leads on to culture. There was a time, many years ago, when the mainframe employees – operators, systems programmers, DBAs, etc – didn’t really know which company they worked for; they just knew that they worked in IT. They could change to a similar job at a different company and it would hardly make any difference to them. Hopefully that’s not the case anymore. A company’s culture can be hard to define in words – although most companies do document what they believe/hope their culture is – but employees know whether they like working there and whether they’d recommend it to their friends. Or whether they simply work there because it pays them and it’s not too far away.

In fact, commute time is one of the biggest factors in how happy people are with their job. And, of course, commute time is not just about the distance travelled. If you can get to work in 20 minutes or under, you have the perfect job for commute time; as the commute gets longer, happiness with the job goes down. This may change as more and more people work from home or work from anywhere, because the commute into the office may only happen once a week, or a couple of times a month, once the pandemic is over.

Increasingly, people are concerned about how ‘green’ they and the company they work for are. Is this new organization you’re looking to work for carbon neutral? Does it offer hybrid cars for staff to use? Does it have charging points in the car park? And there are many other related questions that can be asked. If the company believes global warming is a myth, is this a company with much of a future? And it’s the same with your current company when you advertise a job: what’s its carbon footprint like?

Basically, there’s a lot more to look for when searching for a new job or when recruiting than just salary and job title. So, it’s worth keeping an eye out for these other factors that can make you happy – and make your mainframe staff happy.

Find out more about iTech-Ed Ltd here.

Sunday, 7 March 2021

Tell me about zERT

 

We’ve been talking about securing data at rest and data in transit for a long time. Data in transit matters even more these days: more and more information is being transferred, the mainframe is an important network hub, and ensuring that this traffic is appropriately secured becomes ever more important.

With the introduction of the z14, we got the concept of pervasive encryption and the idea that all data could be encrypted no matter where it was. For data in transit, we’re probably familiar with the TLS/SSL, SSH, and IPSec cryptographic network security protocols, but how do you know the cryptographic status of your traffic? That’s the question that z/OS® Encryption Readiness Technology (zERT) answers. This blog is a very brief summary of what zERT can be used to do.

zERT’s raison d’être is to provide its users with intelligent network security discovery and reporting capabilities. And it does this by monitoring TCP and Enterprise Extender connections, and collecting and reporting on the cryptographic security attributes of IPv4 and IPv6 application traffic.

The data it collects is written to SMF in new SMF 119 subtype 11 and 12 records for analysis. There’s also a new real-time Network Management Interface (NMI) service for network management applications to retrieve zERT SMF records as they are generated.

SMF 119 subtype 11 records contain the full detail of each session, while subtype 12 records capture all unique session types between client/server pairs per interval. Both allow users to see whether traffic is protected and, if so, which security protocol and version are used.

With client/server pairs, zERT can be used to track connections between each pair of client and server IP addresses, and information collected includes the port number, job, and userid.

Looking in more detail, we can see that the zERT summary records contain connection and throughput counters, including: the total number of connections; the number of partially protected connections (where encryption was not applied during the entire session); and the number of short (shorter than 10 seconds) connections. It’s worth noting that short connections can be significant for TLS, because establishing a TLS session is expensive in terms of CPU, making lots of short connections an expensive way to carry traffic.

There are a couple of limitations to zERT. For example, no information is collected for non-EE UDP traffic or for traffic using other IP protocols. If you want to see a list of what these other protocols are, have a look at https://en.wikipedia.org/wiki/List_of_IP_protocol_numbers. And zERT only collects cryptographic security attributes for the TLS, SSL, SSH, and IPSec protocols, not for any other cryptographic security protocols.

When zERT recognizes the cryptographic protection in use, it can show which cryptographic protocol is being used, who the traffic belongs to, which cryptographic algorithms are used, the length of the cryptographic keys, and other important attributes of the cryptographic protection. This can be used to determine regulatory compliance and, importantly, to see whether any connections are currently using cryptographic protection that is not robust enough and needs to be strengthened. It can also provide information for auditors and compliance officers.
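As a concrete illustration of that last point, here’s a minimal sketch of the kind of check you might run once zERT data has been exported somewhere convenient – say, a CSV file produced from your SMF post-processing or from the zERT Network Analyzer’s Db2 tables. The file name, column names, and the list of “weak” protocol versions are my own assumptions for the example, not zERT’s actual field names.

# weak_crypto_report.py – a minimal sketch that scans an exported CSV of zERT
# session data and flags traffic whose protection looks weak or missing.
# NOTE: the file name, column names, and "weak" version labels are illustrative
# assumptions, not actual zERT record field names.
import csv

WEAK_VERSIONS = {"SSLV2", "SSLV3", "TLSV1.0", "TLSV1.1"}  # assumed labels

with open("zert_sessions.csv", newline="") as f:
    for row in csv.DictReader(f):
        protocol = row.get("PROTOCOL", "").upper()          # e.g. TLS, SSH, IPSEC, NONE
        version = row.get("PROTOCOL_VERSION", "").upper()
        client = f"{row.get('CLIENT_IP', '?')} -> {row.get('SERVER_IP', '?')}:{row.get('SERVER_PORT', '?')}"
        if protocol in ("", "NONE"):
            print(f"UNPROTECTED: {client} (job {row.get('JOBNAME', '?')}, userid {row.get('USERID', '?')})")
        elif version in WEAK_VERSIONS:
            print(f"WEAK ({protocol} {version}): {client}")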

Of course, zERT does not collect or record the values of keys, initialization vectors, or any other secret values that are exchanged or negotiated during the session.

In terms of the performance impact of zERT, there are a few things to consider. It’s estimated that the performance impact on the TCP/IP stack is quite small, in terms of latency and CPU consumption. On the other hand, the zERT Network Analyzer can affect system CPU consumption because it is a data-intensive application. However, zERT Network Analyzer is a Java application, and it uses Db2 for z/OS as its data store, so, a lot of the processing is zIIP eligible. There’s also zERT aggregation, which can be used to reduce the volume of zERT-generated SMF data in situations where there are workloads with lots of short-lived connections.

zERT looks like a really useful tool from IBM. zERT Discovery collects and records cryptographic information, and zERT Aggregation groups attributes by security session. As a tool, it provides a way for users to get a grip on the overall quality of the cryptographic protection for their z/OS network. The security team can find out whether they have any security exposures. They can see whether any unapproved protection protocols are being used, or even whether there are some cases where no protection is being used on data in transit.

Find out more about iTech-Ed Ltd here.

Sunday, 28 February 2021

Mainframe, Cloud, and the Post-Pandemic IT Landscape


With so much value being locked up in data, which is often held on tape, Model9 wrote in this year’s Arcati Mainframe Yearbook about how to leverage and monetize that data using the cloud.

The year ahead, one more step into what promises to be a tumultuous decade for mainframes, is full of opportunities to do groundwork and lay foundations. Our customers tell us two things, loud and clear: They know they need to do more to leverage their vast store of mainframe data and they also continue to feel squeezed by the costs and limitations of traditional storage options.

Put another way, they know the mainframe is a solid, reliable core for their business, but they recognize competing priorities and opportunities, most of which involve leveraging the cloud for agility, faster and better analytics, and potentially more cost-effective storage.

All of these trends were becoming visible in 2019, but the double hammer blow of Covid disruptions and lockdowns, together with the secondary economic impacts, is powering them to prominence.

While some organizations have found ways to actually eliminate the mainframe, we believe there is still a strong case for most enterprises to have access to a dedicated processing powerhouse to ensure mission-critical activities and align fully with corporate priorities. But when it comes to the data stewardship occurring within the mainframe environment, there are two closely related problems.

First and foremost, business units, divisions, and even individual contributors are “voting with their feet” – putting more and more data and more and more analytics in the cloud. It’s easy, simple, and can often produce immediate and valuable business results. But this independence limits the potential for cross-fertilization between the new and the old, the central and the peripheral, and between what’s core and what’s potentially just short-term. It is past time to relink those worlds – enhancing data quality for both and making mainframe even more relevant.

Second, the historic dependence on tape for long-term and archive storage, as well as for backup and recovery, results in heavy, ongoing investments in hardware, software, and the people to manage them. It also ensures that those petabytes of data – not to mention active data that is similarly walled off by its proprietary forms and formats – are difficult to reach with the new tools that live in the cloud.

A recent report from leading analyst firm, Gartner, predicts, “by 2025, 35% of data center mainframe storage capacity for backup and archive will be deployed on the cloud to reduce costs and improve agility, which is an increase from less than 5% in 2020.” That’s momentous!

Cloud is no longer an experiment or a theory. It is a substantial part of the world’s information infrastructure. Even more than the changes wrought in the mainframe world by personal computers and client-server computing a generation ago, cloud promises tectonic change – but change that can benefit mainframe operations. Indeed, unlike mid-range computers and PCs in the past, the cloud shouldn’t be perceived as a threat for the mainframe but rather as a platform for integration/collaboration.

Mainframe organizations should consider where they stand with regard to existing technology, what their business needs and business opportunities are, and the available paths forward. The decision making is a step on what must become a journey.

Organizations face a continuing challenge in the stranglehold of tape and proprietary data formats, which are built around sequential read-write cycles that date back to 9-track tape. With this technology, both routine data management tasks as well as more ambitious data movements put an enormous processing drain on the mainframe itself and lead to increased MIPS costs.

The solution is to eliminate the direct costs of acquisition, maintenance, and management and instead to deal directly with the data. This involves mastering data movement, which allows data to be used where it is needed and stored where it is most cost effective. This step can provide immediate dividends, even if you plan to go no farther.

By using modern storage technologies for mainframe that provide flexibility, you are no longer confined to a narrow list of choices. The technology can also provide the connectivity that enables movement between the mainframe and the new storage platform. With this enabler, you can be master of your own data.

The next challenge is achieving real integration of mainframe data with other sources of data in the cloud.

The solution is to take advantage of Extract-Load-Transform (ELT) technology instead of complex, old, slow, compute-intensive ETL approaches. ELT can leverage processing capabilities within the mainframe outside of the central processors (e.g. zIIP engines) and TCP/IP to rapidly extract mainframe data and load it into a target. There, transformation to any desired format can occur economically and flexibly. The net result is more cost effective and generally faster than ETL.

Building on your movement and transformation capability can help you better engage with cloud applications when and where it makes sense. It is an ideal way to move secondary data storage, archive, and even backup data to the cloud, and then transform it to universal formats.

Liberated from the complexities of the mainframe, this data can be the crucial difference between seeing the full business picture and getting the full business insights, or missing out entirely. This data can also provide a powerful addition to a data lake, or can be exposed to the latest agile analytical tools – potentially delivering real competitive advantage to the enterprise at modest cost. And all this without adversely impacting traditional mainframe operations, since ELT can move data in either direction as needed. Transformation can apply to any type of mainframe data, including VSAM, sequential, and partitioned data sets, and that data can be converted to standard formats such as JSON and CSV.
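To make that final transformation step a little more concrete, here is a minimal sketch of the kind of conversion involved, assuming a fixed-width, EBCDIC-encoded extract has already been landed in cloud storage. The record layout, field names, and file names are purely illustrative – a real implementation would be driven by the copybook for the data set and by whichever ELT tooling you use.

# transform_extract.py – illustrative only: converts a fixed-width, EBCDIC-encoded
# extract (already landed in cloud storage) into JSON Lines records.
# The 40-byte record layout below is a made-up example, not a real copybook.
import json

RECORD_LENGTH = 40
FIELDS = [                     # (name, offset, length) – hypothetical layout
    ("account_id", 0, 10),
    ("customer_name", 10, 20),
    ("balance_cents", 30, 10),
]

def to_json_lines(raw: bytes):
    """Yield one JSON document per fixed-width record."""
    for pos in range(0, len(raw), RECORD_LENGTH):
        record = raw[pos:pos + RECORD_LENGTH].decode("cp037")  # EBCDIC code page 037
        doc = {name: record[off:off + length].strip() for name, off, length in FIELDS}
        yield json.dumps(doc)

if __name__ == "__main__":
    with open("extract.dat", "rb") as infile, open("extract.jsonl", "w") as outfile:
        for line in to_json_lines(infile.read()):
            outfile.write(line + "\n")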

Once mainframe data is no longer locked in a silo it can be leveraged and monetized for new business purposes, right in the cloud and in ways not possible in a traditional tape or VTL environment.

The final challenge for some organizations, for example those that evolved to a highly decentralized operational model, is that a central, on-premises mainframe may no longer deliver benefits and may, in fact, add to latency, cost, and inflexibility.

The solution for these organizations is to recognize that data is the only non-negotiable element. If the data is in the cloud already, or if you can get it there, you can grow more and more native capability in the cloud, ranging from operational applications to analytics. Replicating or matching traditional mainframe functions with cloud-based functionality is challenging but achievable. As the most substantial step in a cloud journey, it is necessarily the most complex, especially when organizations decide to actually rewrite their existing applications for the cloud. But ELT and the liberation of data from mainframe silos can lay the groundwork and provide a solid basis for finding a workable path to move beyond the mainframe. In particular, moving historical data from voluminous and expensive tape infrastructure to the cloud permits consideration of a post-mainframe future, if so desired.

Complete migration to the cloud is not for all organizations. But for some it offers an opportunity to transform and grow in new ways.

By enabling the mainframe to play more effectively in the data business value game, cloud connectivity and cloud capabilities can make the mainframe an even more valuable and sustainable platform for the organization. Indeed, these options have the potential to make the mainframe even more integral to an organization’s future.

You can read the full article from Model9 here.