Sunday, 10 November 2024

More on security

Following on from last week’s blog, entitled Insider threats and SMF, I recently got a press release from application security SaaS company Indusface putting some figures on the problem that organizations face from their own employees. It’s not just that a very small minority of employees seem intent on bringing their company down by deleting data or launching ransomware attacks; there also seems to be a huge pool of people who inadvertently give away information, open malware, or click on ‘dodgy’ links that leave companies wide open to serious attacks by bad actors.

The people at Indusface used global search data from Ahrefs to find the world’s top five questions and concerns about cyber security in the workplace. The data from Ahrefs, which was correct as of October 2024, can be found here. They then came up with their own suggested answers to those searches.

I’d like to start with the question that came in fourth place, which was “What percentage of breaches are human error responsible for?” There were similar searches on “Human error cyber security”.

Their answer was: “According to data by Indusface, 98% of all cyber-attacks rely on human error or a form of social engineering. Social engineering breaches leverage human error, emotions, and mistakes rather than exploiting technical vulnerabilities. Hackers often use psychological manipulation, which may involve coaxing employees to reveal sensitive information, download malicious software, or unknowingly click on harmful links. Unlike traditional cyberattacks that rely on brute force, social engineering requires direct interaction between attacker and victim.

“Given that human error can be a major weak link in cyber security, the best way to prevent these attacks is to put in place education and training on the types of attacks to expect and how to avoid them. That said, implementing a zero-trust architecture, where every request for a resource is vetted against an access policy, will be paramount in stopping attacks from spreading even when a human error results in a breach. Also, make sure that applications are pen tested for business logic and privilege escalation vulnerabilities so that the damage is minimized.

“Basics such as standard best practices across the board, secure communications, knowing which emails to open, when to raise red flags, and exercising extreme caution when accepting offers will go a long way in preventing human errors that lead to breaches.”
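To make the zero-trust point in that answer more concrete, here is a minimal sketch in Python of an access-policy check, where every request for a resource is vetted, even from a valid, authenticated user. All of the names here (POLICY, Request, is_allowed) are illustrative, not any particular product’s API.

    from dataclasses import dataclass

    # Policy: resource -> set of (role, action) pairs that are permitted.
    POLICY = {
        "payroll.dataset": {("hr_admin", "read"), ("hr_admin", "write")},
        "audit.logs":      {("security", "read")},
    }

    @dataclass
    class Request:
        user: str
        role: str
        resource: str
        action: str

    def is_allowed(req: Request) -> bool:
        """Deny by default; allow only what the policy explicitly permits."""
        return (req.role, req.action) in POLICY.get(req.resource, set())

    # Even a valid, authenticated user is checked on every request:
    req = Request(user="jsmith", role="developer",
                  resource="payroll.dataset", action="read")
    print(is_allowed(req))  # False - developers have no grant on payroll data

The point of the deny-by-default check is that a compromised but valid userid can still only reach what the policy explicitly grants it.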

Let’s look at the other search terms in the top five. In first place, with the most searches, was “Why is cyber security training so important for business?” There were similar searches for “Cyber security for business”.

The answer from Indusface was: “With data breaches costing businesses an average of $4.45 million globally in the last year (according to IBM’s Cost of a Data Breach Report 2023), it’s clear just how critical it is for organizations to provide employees with comprehensive training on what constitutes sensitive data and how they can protect it, as well as what is at stake if they do not adhere to the policies.

“And training doesn’t have to be monotonous. For example, set up phishing email simulators to engage the team and allow them to see the potential dangers in action. These simulations show how quickly and easily attacks can happen, helping employees develop practical, hands-on skills for spotting suspicious activity.

“Cybersecurity threats evolve constantly, so training should be regular, not a one-time event. Regular sessions ensure that employees receive tailored guidance on securing their work equipment and home offices, using VPNs, and recognizing the unique threats posed by both in-office and home working environments.”

The second most frequent searches were “How is AI used in cyber security?” or simply “Cyber Security AI”.

Indusface said: “The biggest problem with security software, especially website and API protection, is the prevalence of false positives. False positives are when legitimate users are prevented from accessing an application. So notorious is this problem that more than 50% of businesses worldwide have implemented Web Application and API Protection/Web Application Firewall (WAAP/WAF) solutions and left them in log mode. This means that attacks go through the WAF, and the tools are at best used for log analysis after a breach.

“Effectively using AI can help eliminate false positives, or reduce them to a bare minimum, and encourage more businesses to deploy WAFs in block mode.

“The other problem with security software is letting an attack go through. These misses are called false negatives. AI trained on past user behaviour and attack logs can effectively prevent attacks that don’t conform to typical user behaviour.”
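As a rough illustration of that last idea – and only an illustration, not how any particular WAF implements it – here is a Python sketch that scores requests against a baseline built from past behaviour. The log format and threshold are invented for the example.

    from collections import Counter

    # Past logs: (user, endpoint) pairs observed during normal operation.
    history = [
        ("alice", "/api/orders"), ("alice", "/api/orders"),
        ("alice", "/api/profile"), ("bob", "/api/reports"),
    ]

    profile = Counter(history)
    totals = Counter(user for user, _ in history)

    def anomaly_score(user: str, endpoint: str) -> float:
        """1.0 = never seen this user do this; nearer 0.0 = routine."""
        if totals[user] == 0:
            return 1.0                 # unknown user: maximally suspicious
        return 1.0 - profile[(user, endpoint)] / totals[user]

    THRESHOLD = 0.9
    for user, endpoint in [("alice", "/api/orders"), ("alice", "/admin/dump-db")]:
        score = anomaly_score(user, endpoint)
        print(user, endpoint, round(score, 2),
              "block" if score >= THRESHOLD else "allow")

A real product would model far more than endpoint frequencies, but the principle is the same: learn what ‘typical’ looks like, then treat large deviations as potential attacks rather than letting them through as false negatives.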

Third in their list was “How can you protect your home computer?” and “Home cyber security”. They note that, according to a Forbes article, approximately 22% of workers will work remotely by 2025. They go on to ask, with such a significant increase in remote roles, how can employers ensure their employees’ home computers remain protected?

Their answer was: “Remote working means people are working in less secure environments and their devices are more exposed to data breaches both digitally and physically. Many remote workers are using the same device for professional and personal use, or even accessing company data on devices shared with other household members.

“Employers should ensure strong password management, including using automatic password generators that create extra-secure passwords, and never duplicating passwords across accounts. Multi-factor authentication also provides a secure method of verifying your identity, making it harder for hackers to breach any accounts. Limiting what can be accessed on official devices is also important in thwarting attacks.

“That said, installing endpoint security software like antivirus, and keeping it updated, should be enough to protect most computers, unless you fall victim to an advanced phishing attack.”
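The password-generation advice above is easy to follow with nothing more than a standard library. Here is a minimal sketch in Python using the secrets module; the length and alphabet policy are illustrative assumptions, not a recommendation from Indusface.

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Generate a cryptographically strong random password."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # One distinct password per account - never duplicated across accounts.
    for account in ("vpn", "email", "source-control"):
        print(account, generate_password())

Note the use of secrets rather than random: the former is designed for security-sensitive randomness.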

The fifth most popular searches were “What are the top 3 targeted industries for cyber-attacks?” and “Top industries cyber-attack”.

Here’s what Indusface said: “According to EC University, manufacturing, professional / business, and healthcare are the top 3 targeted industries.

“The manufacturing sector leads the world in cybercrime incidents according to Statista (2023). Attacks on the industry range from halting production lines to stealing intellectual property and compromising the integrity of supply chains.

“The professional, business, and consumer services sector has also become an attractive target for cybercriminals due to its heavy reliance on sensitive data. Confidential client information and business insights are often targeted, leading to significant financial losses and damage to brand reputation and client relationships.

“A breach in the healthcare industry can have dire consequences, from compromising sensitive patient data to disrupting critical medical services. Given the high value of medical records on the black market, there is an urgent need for stronger cybersecurity measures to protect both patient privacy and the integrity of healthcare systems.”

I thought it was useful to get another view on the ongoing issue of keeping your mainframe – and any other platforms your organization supports – safe from breaches, and of keeping your employees alert at all times to potential threats.

Sunday, 3 November 2024

Insider threats and SMF

Many people think that SMF records will tell you everything that has happened at a site and, if you link them to some kind of alerting software, that they will act as the cornerstone of your mainframe’s security. And that, as they sleep snugly in their beds at night, is their mainframe security done and dusted.

Many people think that all the people who work for their organization and access their mainframes are intelligent and trustworthy, and are not really worth worrying about, when their main focus should be on gangs trying to extort money or hostile nation states trying to destroy their country’s competitors, or just damage the infrastructure of any country they view as hostile. That’s where an organization’s main security focus should be, surely?

Let’s start by deciding what an insider threat actually is. Consider first the people who are employed by an organization. They have a valid userid and password and a legitimate right to be accessing the mainframe. Now, every so often, humans will make mistakes. Some are small – and some can be quite major. It may be the case that your trusted insider accidentally deletes files or makes some other changes to the mainframe. Provided that person owns up straightaway, the IT team can usually solve the problem fairly promptly. Files can be restored from backups before other batch jobs that use those files are scheduled to run. And chaos can be averted.

Other insiders may be more malicious. They may not have got the internal promotion they were expecting or the pay rise that they needed. Other members of staff may have problems outside of the office, for example a growing drug habit or an increasing use of alcohol. They may be running up gambling debts as they try to win back the money they have lost. Both groups are a problem. The disgruntled insiders may well deliberately cause damage to data or applications. They may have the authority to make other changes. And the second group, the addicted users, may well be manipulated by organized crime into infecting the mainframe with some kind of malware that the bad actors associated with those criminals can use to launch a ransomware attack.

These days, disgruntled employees can access Ransomware as a Service (RaaS) applications and launch an attack on the mainframe – hoping that the money they get from the ransom will compensate them for the money the company didn’t give them. It will also have to be enough to support their lifestyle once they go on the run.

Criminal gangs are also on the lookout for credentials that can get them into the mainframe. Disgruntled staff or employees who need money to fund their habits will be approached and offered money for their userids and passwords. Using these, the bad actors can do what they want on the mainframe, safe in the knowledge that most tools processing SMF records won’t identify unusual activity by those accounts.

There’s another group of employees that might be targeted by criminal gangs: people who simply need money. It may be that an ageing relative needs to go into a home, a family member needs an operation, or a relative needs an expensive medication, and the bills have to be paid somehow. These people may be vulnerable to exploitation by criminal gangs.

Of course, ordinary members of staff may be tricked by the use of an AI simulating the voice of their manager, who asks to ‘borrow’ the employee’s userid and password to do some work over the weekend.

Typically, security tools won’t send alerts if valid userids and passwords are used. And if the settings are changed so that an alert is sent, you get the situation where staff get so many false positives that they tend to ignore the messages.

Let’s see what the Cost of a Data Breach Report 2024 from IBM had to say about insider threats. The report says that the global average cost of a data breach in 2024 is US$4.88m, and the USA has the highest average data breach cost at US$9.36m. Compared to other vectors, malicious insider attacks resulted in the highest costs, averaging US$4.99 million. It goes on to say that among other expensive attack vectors were business email compromise, phishing, social engineering, and stolen or compromised credentials.

Using compromised credentials benefited attackers in 16% of breaches. Compromised credential attacks can also be costly for organizations, accounting for an average US$4.81 million per breach. Phishing came in a close second, at 15% of attack vectors, but in the end cost more, at US$4.88 million. Malicious insider attacks were only 7% of all breach pathways.

The report also found that the average time to identify and contain a breach fell to 258 days. However, where credentials were stolen or used by malicious insiders, identification and containment took longer, at an average combined time of 292 and 287 days respectively.

So, while insider threats aren’t the biggest threat to your mainframe, they are still a significant one, both in the amount of money they can cost your organization and in the amount of time it will take to recover from an attack. SMF is great, but security tools don’t usually send alerts when there is unusual activity by the accounts used by employees. So, these activities aren’t identified straight away and won’t be halted. File integrity monitoring software would catch this kind of activity before it became a serious problem. It would be able to identify unusual activity, immediately suspend the job or user, and then send an alert. If it were a real systems programmer working at 2 in the morning from, say, Outer Mongolia, then, once this is confirmed, the job can be allowed to continue. But if you don’t have that type of software installed, guess what’s going to be filling your time for the next 258 days!
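For readers who have never seen one, the core of file integrity monitoring is simple to sketch. The following Python fragment hashes a baseline and compares a later snapshot against it; real products (and anything watching z/OS data sets rather than files) are far more sophisticated, and the paths here are placeholders.

    import hashlib
    from pathlib import Path

    def snapshot(root: str) -> dict[str, str]:
        """Map each file under root to its SHA-256 digest."""
        return {
            str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in Path(root).rglob("*") if p.is_file()
        }

    baseline = snapshot("/critical/config")   # taken when the system is known-good
    # ... later, on a schedule or triggered by an event ...
    current = snapshot("/critical/config")

    for path in baseline.keys() | current.keys():
        if baseline.get(path) != current.get(path):
            # A real tool would suspend the job or user here and send
            # an alert for a human to confirm before work continues.
            print("ALERT: integrity change detected:", path)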

What I’m suggesting is that insider threats are a real issue, and SMF on its own isn’t enough.

Sunday, 13 October 2024

Is anyone really using AI on a mainframe?

We read a lot about artificial intelligence (AI) these days, and random people on LinkedIn message me about specific AI applications (not mainframe-based), but how can we really know what other sites are actually doing with AI on their mainframes?

Firstly, there was the Kyndryl survey that I wrote about in September. You can read it here. And now we have the results from BMC’s mainframe survey, which you can find here. Their survey found that 45% of respondents listed artificial intelligence for IT operations (AIOps) and operational analytics as a top priority. The survey also found that 31% of respondents who have implemented AIOps perceive complexity as a major issue. In addition, the survey found that 60% of extra-large mainframe organizations that are prioritizing AIOps are looking to solve this complexity issue using GenAI solutions, while 57% are using machine learning (ML)-based automation.

So, how many sites have actually got their hands dirty and are using some kind of AI? The survey found that 76% of organizations are using Generative AI (GenAI). GenAI is a type of AI that can create new content like images, videos, text, code, music, and audio. Analysing the data in a slightly different way, the survey found that 86% of respondents who are increasing their mainframe investment are using GenAI. It goes on to suggest that organizations with a flat or decreasing investment in their mainframe systems are significantly less likely to be using GenAI. The survey also found that 82% of those sites increasing their mainframe investment have a GenAI policy in place. I think the need for a GenAI policy cannot be overemphasized, and I am pleased to see so many sites have one in place.

What benefits are the sites using GenAI seeing? The survey found that they included significant improvements in efficiency and operational performance, with 40% reporting notable advancements. Among organizations prioritizing AIOps, 45% reported that GenAI is the most important capability to help them achieve their objectives.

What are the benefits of using GenAI to automate and optimize IT operations? The survey highlighted four areas, which were: 

  • Automation: 37% of organizations want to use GenAI to eliminate repetitive tasks, improving efficiency and freeing up resources for strategic activities. 
  • Identifying issues and risks: 36% of organizations want to analyse code and configuration files to identify problems and vulnerabilities, enhancing security. 
  • Gaining insights: 34% of organizations want to augment existing expertise with critical business insights, supporting decision-making processes. 
  • Training: 33% of organizations plan to use GenAI for onboarding and training new personnel, effectively bridging the knowledge gap.

What can we learn from this? I think we’re well past the toe-in-the-water stage of AI use on a mainframe. However, I’d like to see those figures cross the 50% threshold in order to view AI as completely accepted as a mainframe technology. From my own personal interest in mainframe security, I’d like to see close to 100% of sites using AI as part of their security posture against malware, ransomware, and people using AI as an easy way of breaching an organization’s mainframe security.

Let’s take a quick look at some of the other results from that survey. 94% of respondents viewed the mainframe as a long-term platform or a platform for new workloads, which is heartening. And 90% of respondents said that their organizations are continuing to invest in their mainframes – hooray!

What priorities did they find in the survey? 64% of respondents had compliance and security at the top of their list. Ransomware is also high on people’s agenda but, worryingly, there was an 8% drop in the sites that found their ransomware controls to be extremely effective. As I’ve written about before, the bad actors are making it easier for non-experts to use their technology to breach mainframes. Cost optimization was also a top priority, and so was AIOps. Other respondents are looking at connecting mainframes to cloud-based workloads, and utilizing a cloud-based mainframe (mainframe as a service).

The survey also found that the use of Java for mainframe code is increasing. This, they suggest, is not only because organizations want code that is accessible across platforms, but also because it allows developers to write mainframe code without needing additional training. The survey found both an increase in new applications being written in Java and existing applications being rewritten in Java.

I always find surveys interesting to see what is going on at mainframe sites – or at least at the mainframe sites that are prepared to complete surveys. I think the most significant result is the growth in the use of artificial intelligence on mainframes. So, to answer my title question: yes, people are using AI on the mainframe.

If you do like completing mainframe surveys, look out for the Arcati Mainframe Yearbook’s survey later in the year. You can find the whole thing, including the 2024 user survey report here.

Monday, 7 October 2024

Zowe LTS V3 released

Zowe, the open-source software from the Open Mainframe Project of the Linux Foundation, was originally launched to make it easy for IT specialists with no mainframe experience to access and utilize data and applications on z/OS, using their knowledge and experience of tools that previously weren’t available on mainframes.

The Open Mainframe Project (OMP) describes Zowe as an open-source software framework for the mainframe that strengthens integration with modern enterprise processes and tools, and offers vendors and customers the ability to execute on modernization initiatives with stability, security, and interoperability, as well as easy installation and a continuous delivery model for receiving upgraded features.

On 3 October, the OMP announced the launch of Zowe’s Long Term Support (LTS) V3 Release. 

For mainframers who are still a little unfamiliar with Zowe, the press release tells us that it’s an integrated and extensible open-source framework for z/OS, and that it comes with a core set of applications out of the box in combination with the APIs and OS capabilities future applications will depend on. It offers modern interfaces to interact with z/OS and allows users to work with z/OS in a way that is similar to how they will have worked on cloud platforms. Developers can use these interfaces as delivered or through plug-ins and extensions that are created by clients or third-party vendors. For example, Zowe V3 offers new support for the IntelliJ Zowe Explorer plugin as well as a simplified install wizard.
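Those modern interfaces sit on top of z/OS REST services such as z/OSMF. As a flavour of the ‘cloud-like’ access the press release describes, here is a hedged Python sketch that lists data sets via the z/OSMF REST files API; the host and credentials are placeholders, and in practice you might go through the Zowe API Mediation Layer or use the Zowe CLI and SDKs instead.

    import requests

    ZOSMF = "https://mainframe.example.com"       # placeholder host
    resp = requests.get(
        f"{ZOSMF}/zosmf/restfiles/ds",
        params={"dslevel": "IBMUSER.*"},          # data set name filter
        headers={"X-CSRF-ZOSMF-HEADER": ""},      # required by z/OSMF
        auth=("userid", "password"),              # placeholder credentials
    )
    resp.raise_for_status()
    for ds in resp.json().get("items", []):
        print(ds.get("dsname"))

To a developer raised on cloud platforms, that looks like any other REST call – which is exactly the point of Zowe.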

The press release lists some of the benefits of the LTS V3 including:

  • Durability: a refreshed set of core components that make up the software stack, giving a secure, stable shelf life that ensures years of use with continued updates and support.
  • Stability: the installation and configuration have been stabilized through V3. Organizations can confidently adopt the technology for enterprise use and upgrade when appropriate for their environment, minimizing the risk of disruption.
  • Enhanced security: an enhanced security posture by actively monitoring dependencies and upgrading them proactively. This helps mitigate risks associated with outdated or vulnerable dependencies, offering more robust security features compared to earlier versions.

The new release of Zowe increases product durability, stability, and security with the support of a large open-source community and a Conformance Program.

Because of my long association with the Arcati Mainframe Yearbook, I am always pleased to see its survey results quoted in press releases. This one says: “According to the Arcati Mainframe Yearbook 2024, the independent annual guide for users of mainframe systems, 85% of mainframe organizations will be adopting Zowe by the end of the year or have already adopted it into their modern enterprise solutions.”

“The continued success of Zowe as a community-driven project highlights the importance of the mainframe as an open platform supporting hybrid cloud architectures”, said George Decandio, chief technology officer, Mainframe Software Division, Broadcom. “The latest V3 release introduces new components that expand capabilities to client SDKs and additional IDEs, reflecting Zowe’s ongoing evolution to meet the needs of the mainframe ecosystem. Notably, this update enhances the Zowe API Mediation Layer, a key component our customers view as essential in transforming the role of the mainframe in their multi-platform environments.”

“Zowe’s progress underscores a broader commitment to open, interoperable standards, enabling organizations to maximize the value of their mainframe and IT infrastructure investments”, said Decandio. “Broadcom is proud to be a leading contributor to this community and is committed to supporting the project’s continued growth.”

“Zowe V3 is the culmination of five years of work by volunteers from around the world”, said Bruce Armstrong, IBM Z Principal Product Manager at IBM and member of the Zowe Advisory Council (ZAC). “I am particularly proud of the fact that Zowe has revolutionized access to z/OS-based services for thousands of next-generation developers and system programmers that will continue the platform’s success for decades to come.”

“Rocket Software is a proud founding contributor of Zowe”, said Tim Willging, Fellow and VP of Software Engineering at Rocket Software. “It’s been incredible to see the success and passion of the open-source community in supporting hybrid cloud initiatives. The expanded capabilities in the V3 release will help accelerate an organization’s modernization journey and provide them with enhanced security, maintainability, and scalability needed to match their customers’ needs – now and in the future.”

Zowe is a contributor-led community with participating vendors including, but not limited to, Broadcom, IBM, Phoenix Software, Rocket Software, and Vicom Infinity. As a result of their extensive collaboration, the following Zowe extensions have been transformed in Zowe V3:

  • Explorer for IntelliJ gives developers working in the IntelliJ IDEs the capability to work with the z/OS platform.
  • The Kotlin and Java SDKs are Generally Available extensions simplifying interaction with z/OS from Java and Kotlin applications.
  • The IMS service and the current CLI extensions are archived. IBM is working on replacements.
  • The Zowe Conformance Program is updated with LTS V3 Guidelines.

Aimed at building a vendor-neutral ecosystem around Zowe, the OMP’s Zowe Conformance Program was launched in 2019. The program has helped OMP members incorporate Zowe with new and existing products that enable integration of mainframe applications and data across the enterprise.

To date, 77 products have implemented extensions based on the Zowe framework, earning these members conformance badges.

Additional resources include the Zowe GitHub Repository, the Zowe Community Website, and the Getting Started documentation site.

The Open Mainframe Project is an open source initiative that enables collaboration across the mainframe community to develop shared tool sets and resources. It is intended to serve as a focal point for deployment and use of Linux and open source in a mainframe computing environment. With a vision of open source on the mainframe as the standard for enterprise-class systems and applications, the project’s mission is to build community and adoption of open source on the mainframe by eliminating barriers to open source adoption on the mainframe, demonstrating value of the mainframe on technical and business levels, and strengthening collaboration points and resources for the community to thrive.


Sunday, 22 September 2024

AI is all the rage!

Perhaps no surprises there. Pixel phones now come with Gemini. My video editing software has AI integration. My Opera browser comes with AI. Just about everything has some sort of AI integrated. So, it won’t come as a shock to find mainframers are as keen on AI as everyone else.

That’s what Kyndryl are telling us based on the results of their recent State of Mainframe Modernization Survey. They say that 2024 is the year of AI adoption on the mainframe. The survey also found that modernization projects are delivering significant financial benefits; however, many organizations face skills shortages that prevent the transformation of complex mission-critical systems. Although, to be honest, that may be a good thing. After all, plenty of applications are best placed on a mainframe rather than trying to convert everything to run in the cloud. We’ve rehearsed the arguments for and against cloud in these blogs before, highlighting what works best in the cloud and what doesn’t.

Kyndryl surveyed 500 business and IT leaders and found that 86% of respondents are adopting AI and generative AI to accelerate their mainframe modernization initiatives. In addition, a third of respondents said that mainframes have become a foundation for running AI-enabled workloads. Lastly, almost half the people surveyed aim to use generative AI to unlock and transform critical mainframe data into actionable insights.

The survey also found that IT modernization projects and patterns are resulting in substantial business results, including triple-digit one-year return on investment (ROI) of 114% to 225%, and collective savings of $11.9 billion annually. Not surprisingly, most organizations have chosen a hybrid IT strategy.

According to the survey, 86% of respondents think that mainframes remain essential (why not 100%?). In addition, the survey found that 96% of respondents are migrating a portion (on average, 36%) of their applications to the cloud.

The survey found that organizations are running 56% of their critical workloads on a mainframe. Over half of the respondents said mainframe usage increased this year and 49% expect that trend to continue.

Other findings in the survey included the fact that many respondents are still facing a skills shortage, especially in new areas such as generative AI, which they hope will facilitate mainframe transformation and help alleviate the skills gap. Not surprisingly, security skills are in high demand because of increasing regulatory compliance requirements, with almost all respondents flagging security as the key factor driving modernization decisions. As a consequence, 77% of organizations in the survey are using external providers to deliver mainframe modernization projects.

Interestingly, respondents identified enterprise-wide observability as critical to effectively leveraging all data across their hybrid IT environment. In fact, 92% of respondents indicated that a single dashboard is important for monitoring their operations, but 85% stated they find it difficult to do this properly.

The survey was carried out by Coleman Parkes Research.

Sometimes I feel a bit like King Knut (Canute) sitting there trying to tell the tide to go back, because I am forever telling people that the mainframe is the most modern computing platform currently available. It’s very difficult to modernize something that is already that modern! However, I, of course, recognize that other platforms have advantages in certain areas. I don’t carry a mainframe when I go to a business meeting, I use my very slim laptop. Likewise, cloud computing offers lots of benefits too. But I wouldn’t consider converting my mainframe applications (with all their spaghetti-like integration with other applications) to run on a laptop, nor would I want to move them all to the cloud. It’s horses for courses, as they say.

Personally, I like the idea of artificial intelligence, when it can do lots of useful tasks that I may not want to do, and I am not surprised to see mainframe sites embracing its usage. I will be interested to see what the BMC 19th Annual Mainframe Survey finds about the adoption of AI on mainframes when its results are published this week.


Sunday, 8 September 2024

A chip off the new block

IBM may not have announced a new mainframe, but it has told us all about the chips that will be powering those mainframes – and it’s very much aimed at making artificial intelligence (AI) software run faster and better.

Let’s take a look at the details.

Back in 2021, we heard about the Telum I processor with its on-chip AI accelerator for inferencing. Now we hear about the Telum II processor, with improved AI acceleration, and about the new IBM Spyre™ Accelerator. We’ll get to see these chips in 2025.

The new chip has been developed using Samsung 5nm technology and has 43 billion transistors. It will feature eight high-performance cores running at 5.5GHz. The Telum II chip will include a 40% increase in on-chip cache capacity, with the virtual L3 and virtual L4 growing to 360MB and 2.88GB respectively. The processor integrates a new data processing unit (DPU) specialized for IO acceleration and the next generation of on-chip AI acceleration. These hardware enhancements are designed to provide significant performance improvements for clients over previous generations.

Because the integrated DPU has to handle tens of thousands of outstanding I/O requests, instead of putting it behind the PCIe bus, it is coherently connected and has its own L2 cache. IBM says this increases performance and power efficiency. In fact, there are ten 36MB L2 caches, with the eight 5.5GHz cores running at a fixed frequency, and the onboard AI accelerator runs at 24 trillion operations per second (TOPS). IBM claims the new DPU offers increased frequency, memory capacity, and an integrated AI accelerator core, allowing it to handle larger and more complex datasets efficiently.

You might be wondering why AI on a chip is so important. IBM explains that its AI-driven fraud detection solutions are designed to save clients millions of dollars annually.

The compute power of each accelerator is expected to be improved by a factor of 4, reaching that 24 trillion operations per second we just mentioned. Telum II is engineered to enable model runtimes to sit side by side with the most demanding enterprise workloads, while delivering high throughput, low-latency inferencing. Additionally, support for INT8 as a data type has been added to enhance compute capacity and efficiency for applications where INT8 is preferred, thereby enabling the use of newer models.

New compute primitives have also been incorporated to better support large language models within the accelerator. They are designed to support an increasingly broad range of AI models for a comprehensive analysis of both structured and textual data.

IBM has also made system-level enhancements in the processor drawer. These enhancements enable each AI accelerator to accept work from any core in the same drawer, improving load balancing across all eight of those AI accelerators. This gives each core access to more low-latency AI acceleration, designed for 192 TOPS (eight accelerators × 24 TOPS) when fully configured across all the AI accelerators in the drawer.

Brand new is the IBM Spyre Accelerator, which was jointly developed with IBM Research and IBM Infrastructure development. It is geared toward handling complex AI models and generative AI use cases. The Spyre Accelerator will contain 32 AI accelerator cores that will share a similar architecture to the AI accelerator integrated into the Telum II chip. Multiple IBM Spyre Accelerators can be connected into the I/O Subsystem of IBM Z via PCIe.

The integration of Telum II and Spyre accelerators eliminates the need to transfer data to external GPU-equipped servers, thereby enhancing the mainframe's reliability and security, and can result in a substantial increase in the amount of available acceleration.

Both the IBM Telum II and the Spyre Accelerator are designed to support a broader, larger set of models with what are called ensemble AI use cases. Ensemble AI leverages the strengths of multiple AI models to improve the overall performance and accuracy of a prediction compared with individual models.

IBM suggests insurance claims fraud detection as an example of an ensemble AI method. Traditional neural networks are designed to provide an initial risk assessment, and when combined with large language models (LLMs), they are geared to enhance performance and accuracy. Similarly, these ensemble AI techniques can drive advanced detection for suspicious financial activities, supporting compliance with regulatory requirements and mitigating the risk of financial crimes.
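As a schematic of that ensemble pattern – a stand-in sketch, not IBM’s implementation – a cheap model can screen every transaction while only borderline cases are escalated to a heavier model such as an LLM. All the thresholds and fields here are invented for illustration.

    def fast_risk_score(claim: dict) -> float:
        """Stand-in for a traditional neural network's risk assessment."""
        score = 0.0
        if claim["amount"] > 10_000:
            score += 0.4
        if claim["prior_claims"] > 3:
            score += 0.3
        return min(score, 1.0)

    def llm_review(claim: dict) -> float:
        """Stand-in for a slower, more accurate large-model review."""
        return 0.9 if "urgent wire transfer" in claim["narrative"].lower() else 0.1

    def ensemble_decision(claim: dict) -> str:
        score = fast_risk_score(claim)
        if 0.3 <= score <= 0.7:        # borderline: ask the second model
            score = (score + llm_review(claim)) / 2
        return "flag for investigation" if score > 0.5 else "approve"

    claim = {"amount": 12_000, "prior_claims": 1,
             "narrative": "Urgent wire transfer requested"}
    print(ensemble_decision(claim))    # flag for investigation

The design choice is the one the announcement hints at: the expensive model only runs when the cheap one is unsure, which is what makes low-latency inferencing alongside demanding workloads feasible.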

The new Telum II processor and IBM Spyre Accelerator are engineered for a broader set of AI use cases to accelerate and deliver on client business outcomes. We look forward to seeing them in the new IBM mainframes next year.


Sunday, 1 September 2024

Cybersecurity Assistance

There are two areas that I am particularly interested in. They are artificial intelligence (AI) and mainframe security. And IBM has just announced a generative AI Cybersecurity Assistant.

Worryingly, we know that ransomware malware is now available for people to use to attack mainframe sites – even people who may not have a lot of mainframe expertise. Launching a ransomware attack on an organization has been totally de-skilled. We also know from IBM’s Cost of a Data Breach Report 2024 that organizations using AI and automation lowered their average breach costs by an average of US$1.8m compared to those not using AI and automation. In addition, organizations extensively using security AI and automation identified and contained data breaches nearly 100 days faster on average than organizations that didn’t use these technologies at all.

The survey also found that among organizations that stated they used AI and automation extensively, about 27% used AI extensively in each of these categories: prevention, detection, investigation, and response. Roughly 40% used AI technologies at least somewhat.

So that makes IBM’s new product good news for most mainframe sites. Let’s take a more detailed look.

Built on IBM’s watsonx platform, this new GenAI Cybersecurity Assistant for threat detection and response services enhances alert investigation for IBM Consulting analysts, accelerating threat identification and response. The new capabilities can reduce investigation times by up to 48%, offering historical correlation analysis and an advanced conversational engine to streamline operations.

That means IBM’s managed Threat Detection and Response (TDR) Services, utilized by IBM Consulting analysts, now include the Cybersecurity Assistant module to accelerate and improve the identification, investigation, and response to critical security threats. The product “can reduce manual investigations and operational tasks for security analysts, empowering them to respond more proactively and precisely to critical threats, and helping to improve overall security posture for clients”, according to Mark Hughes, Global Managing Partner of Cybersecurity Services, IBM Consulting.

IBM’s Threat Detection and Response Services is said to be able to automatically escalate or close up to 85% of alerts; and now, by bringing together existing AI and automation capabilities with the new generative AI technologies, IBM’s global security analysts can speed the investigation of the remaining alerts requiring action. As mentioned earlier, the best figure they are quoting for reducing alert investigation times using this new capability is 48% for one client.

Cybersecurity Assistant cross-correlates alerts and enhances insights from SIEM, network, Endpoint Detection and Response (EDR), vulnerability, and telemetry to provide a holistic and integrative threat management approach.

By analysing patterns of historical, client-specific threat activity, security analysts can better comprehend critical threats. Analysts will have access to a timeline view of attack sequences, helping them to better understand the issue and providing more context to investigations. The assistant can automatically recommend actions based on the historical patterns of analysed activity and pre-set confidence levels, which can reduce response times for clients and so reduce the amount of time that attackers are inside an organization’s network. By continuously learning from investigations, the Cybersecurity Assistant’s speed and accuracy are expected to improve over time.
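A toy sketch of that recommend-or-escalate logic might look like the following Python; the alert format, history, and threshold are invented for illustration and bear no relation to IBM’s actual implementation.

    CONFIDENCE_THRESHOLD = 0.85

    def recommend(alert: dict, history: list[dict]) -> str:
        """Recommend an action based on how similar past alerts were resolved."""
        past = [h for h in history if h["signature"] == alert["signature"]]
        if not past:
            return "escalate to analyst"        # no history, no confidence
        benign_rate = sum(h["benign"] for h in past) / len(past)
        if benign_rate >= CONFIDENCE_THRESHOLD:
            return "auto-close (matches historical benign pattern)"
        return "escalate to analyst"

    history = [{"signature": "port-scan", "benign": True}] * 9 \
            + [{"signature": "port-scan", "benign": False}]
    print(recommend({"signature": "port-scan"}, history))
    # auto-close: 9 of 10 past port-scan alerts were benign (0.9 >= 0.85)

The pre-set confidence level is the safety valve: anything below it still goes to a human analyst.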

The generative AI conversational engine in the Cybersecurity Assistant provides real-time insights and support on operational tasks to both clients and IBM security analysts. It can respond to requests, such as opening or summarizing tickets, as well as automatically triggering relevant actions, such as running queries, pulling logs, command explanations, or enriching threat intelligence. By explaining complex security events and commands, IBM’s Threat Detection and Response Service can help reduce noise and boost overall security operations centre (SOC) efficiency for clients.

Anything that can accelerate cyber threat investigations and remediation has got to be good, and this product does that using historical correlation analysis (discussed above). Its other significant feature is its ability to streamline operational tasks, which it does using its conversational engine (also discussed above).

There really is an arms race between the bad actors and the rest of us. Anything that gives our side an advantage, no matter how brief that advantage might be, has got to be good. Plus, it provides a stepping stone to the next advantage that some bright spark will give us. No-one wants their data all over the dark web, and few companies can afford the cost of fines for non-compliance as well as court costs and payments to people whose data is stolen.

Sunday, 18 August 2024

The cost of a data breach 2024 – part 2

Last time, we looked at the highlights of IBM’s Cost of a Data Breach Report 2024. We saw that the average cost of a breach was US$4.88m, with a malicious insider attack costing US$4.99m on average. Also, the average time to identify and contain a breach was 258 days, which is lower than previous years, but still a very long time.

This time, I wanted to drill down a bit further into the report. For example, it tells us that AI and automation are transforming the world of cybersecurity. Worryingly, they make it easier than ever for bad actors to create and launch attacks at scale. On the plus side, they also provide defenders with new tools for rapidly identifying threats and automating responses to those threats. The report found these technologies accelerated the work of identifying and containing breaches and reducing costs.

The report also found that the number of organizations that used security AI and automation extensively grew to 31% in this year’s study from 28% last year. Although it’s just a 3-percentage point difference, it represents a 10.7% increase in use. The share of those using AI and automation on a limited basis also grew from 33% to 36%, a 9.1% increase.

The report also found that the more organizations used AI and automation, the lower their average breach costs were. Organizations not using AI and automation had average costs of US$5.72m, while those making extensive use of AI and automation had average costs of US$3.84m, a savings of US$1.8m.

Another plus found by the report was that organizations extensively using security AI and automation identified and contained data breaches nearly 100 days faster on average than organizations that didn’t use these technologies at all.

Among organizations that stated they used AI and automation extensively, about 27% used AI extensively in each of these categories: prevention, detection, investigation, and response. Roughly 40% used AI technologies at least somewhat.

When AI and automation were used extensively in each of those four areas of security, average breach costs were dramatically lower than for organizations that didn’t use the technologies in those areas. For example, when organizations used AI and automation extensively for prevention, their average breach cost was US$3.76m. Meanwhile, organizations that didn’t use these tools in prevention saw US$5.98m in costs, a 45.6% difference. Extensive use of AI and automation reduced the average time to investigate data breaches by 33%, and the time to contain them by 43%.


Even after a breach is contained, the work of recovery goes on. For the purposes of the report, recovery meant: business operations are back to normal in areas affected by the breach; organizations have met compliance obligations, such as paying fines; customer confidence and employee trust have been restored; and organizations have put controls, technologies and expertise in place to avoid future data breaches. Only 12% of organizations surveyed said they had fully recovered from their data breaches. Most organizations said they were still working on them.

Among the organizations that had fully recovered, more than three-quarters said they took longer than 100 days. Recovery is a protracted process: roughly one-third of organizations that had fully recovered said they required more than 150 days to do so. A small share, 3%, of fully recovered organizations were able to do so in less than 50 days.


This year’s report found most organizations reported their breaches to regulators or other government agencies. About a third also paid fines. As a result, reporting and paying fines have become common parts of post-breach responses. Most organizations reported the breach within a few days. Over half of organizations reported their data breach in under 72 hours, while 34% took more than 72 hours to report. Just 11% were not required to report the breach at all. More organizations paid higher regulatory fines, with those paying more than US$50,000 rising by 22.7% over last year, and those paying more than US$100,000 rising by 19.5%.


About 40% of all breaches involved data distributed across multiple environments, such as public clouds, private clouds, and on premises. Fewer breaches in the study involved data stored solely in a public cloud, private cloud, or on premises. With data becoming more dynamic and active across environments, it’s harder to discover, classify, track, and secure.

Data breaches solely involving public clouds were the most expensive type of data breach, costing US$5.17m, on average, a 13.1% increase from last year. Breaches involving multiple environments were more common but slightly less expensive than public cloud breaches. On-premises breaches were the least costly.

The more centralized control organizations had over their data, the quicker on average they could identify and contain a breach. Breaches involving data stored solely on premises took an average of 224 days to identify and contain, 23.3% less time than data distributed across environments, which took 283 days. The same pattern of local control and shortened breach life-cycles showed up in the comparison between private cloud architectures and public cloud architectures.

The average cost of a data breach involving shadow data was US$5.27m, 16.2% higher than the average cost without shadow data. Breaches involving shadow data took 26.2% longer on average to identify and 20.2% longer on average to contain than those that didn’t. These increases resulted in data breaches lasting an average lifecycle of 291 days, 24.7% longer than data breaches without shadow data.

While shadow data was found in every type of environment – public and private clouds, on premises and across multiple environments – 25% of breaches involving shadow data were solely on premises. That finding means shadow data isn’t strictly a problem related to cloud storage.

Mega breaches, characterized by more than 1 million compromised records, are relatively rare. The average cost of all mega breach size categories was higher this year than last. The jump was most pronounced for the largest breaches, affecting between 50 million and 60 million records. The average cost increased by 13%, and these breaches were many times more expensive than a typical breach. For even the smallest mega breach – 1 million to 10 million records – the average cost was nearly nine times the global average cost of US$4.88m.


Key factors that reduced costs of a data breach included employee training and the use of AI and machine learning insights. Employee training continues to be an essential element in cyber-defence strategies, specifically for detecting and stopping phishing attacks. AI and machine learning insights closely followed in second place.

The top three factors that increased breach costs in this analysis were security system complexity, security skills shortage, and third-party breaches, which can include supply chain breaches.


70% of organizations in the study experienced a significant or very significant disruption to business resulting from a breach. Only 1% described their level of disruption as low. The average breach costs were higher when business disruption was greater. Even organizations that reported low levels of disruption incurred average data breach costs of US$4.63m. For organizations that reported very significant disruptions, average costs were 7.9% higher, at US$5.01m.

Most organizations said they planned to increase prices of goods and services following a data breach. 63% of organizations surveyed planned to pass the costs on to customers, a 10.5% increase.


This is a report that not only the IT team needs to read, but also the chief financial officer, because it will be that person who will be responsible for paying out company money for the ransom, the fines for lack of compliance, and any court settlements to people whose data has been stolen.

Sunday, 11 August 2024

The cost of a data breach 2024

I do seem to be banging on about security recently, but it really is so important. No-one wants to find that their personally identifiable information has been stolen and is currently being shared all over the dark web. And no-one wants to find that their mainframe or other platforms have had all their data stolen and they are looking at massive fines, compensation payments, and loss of customers and future revenue.

But how do you know exactly how bad things are out there? How do you find out how much it is costing organizations that have been hacked and faced a ransom payment? One answer is the Cost of a Data Breach Report 2024 from IBM. Its headline statistic is that the global average cost of a data breach in 2024 is US$4.88m, which is a 10% increase over last year and the highest total ever. The USA had the highest average data breach cost at US$9.36m. Other regions in the top five were the Middle East, Germany, Italy, and Benelux (Belgium, the Netherlands, and Luxembourg).

The report also identifies an issue with shadow data, saying that it is involved in 1 in 3 breaches. They suggest that the proliferation of data is making it harder to track and safeguard. Slightly better news is the finding that US$2.22m is the average cost saving for organizations that used security AI and automation extensively in prevention versus those that didn’t.

Looking in more detail at the report, we find that more than half of breached organizations are facing high levels of security staffing shortages, and it’s getting worse: the issue shows a 26.2% increase from the previous year. In cash terms, that corresponded to an average US$1.76m more in breach costs. The report goes on to say that even though 1 in 5 organizations say they used some form of gen AI security tools, which are expected to help close the gap by boosting productivity and efficiency, this skills gap remains a challenge.

Many organizations trust all their employees, and yet the report says that the average cost of a malicious insider attack is now US$4.99m. The report says that, compared to other vectors, malicious insider attacks resulted in the highest costs but were only 7% of all breach pathways. Other expensive attack vectors were business email compromise, phishing, social engineering, and stolen or compromised credentials.

Phishing and stolen or compromised credentials ranked among the top 4 costliest incident types. Compromised credentials topped initial attack vectors. Using compromised credentials benefited attackers in 16% of breaches. Compromised credential attacks can also be costly for organizations, accounting for an average US$4.81m per breach. Phishing came in a close second, at 15% of attack vectors, but in the end cost more, at US$4.88m. Gen AI may be playing a role in creating some of these phishing attacks. For example, gen AI makes it easier than ever for even non-English speakers to produce grammatically correct and plausible phishing messages.

Watching TV and movies might make you think that breaches are usually discovered fairly promptly and dealt with the next day. Sadly, the report found that breaches involving stolen or compromised credentials took the longest to identify and contain of any attack vector. That was 292 days. Similar attacks that involved taking advantage of employees and employee access also took a long time to resolve. For example, phishing attacks lasted an average of 261 days, while social engineering attacks took an average of 257 days.

The good news is that the average time to identify and contain a breach fell to 258 days, reaching a 7-year low, compared to 277 days last year. The report points out that this global average of mean time to identify (MTTI) (194 days) and mean time to contain (MTTC) (64 days) excludes Benelux because, as a new region in the study, it had an outsized influence and skewed the results much more than the average.

Ransomware victims that involved law enforcement ended up lowering the cost of the breach by an average of nearly US$1m, although that excludes the cost of any ransom paid. Involving law enforcement also helped shorten the time required to identify and contain breaches from 297 days to 281 days.

The industrial sector experienced the costliest increase of any industry, rising by an average US$830,000 per breach over last year. This cost spike could reflect the need for industrial organizations to prepare for a more rapid response, because organizations in this sector are highly sensitive to operational downtime. However, the time to identify and contain a data breach at industrial organizations was above the industry median, at 199 days to identify and 73 days to contain.

Healthcare is still the costliest sector in terms of a data breach, at US$9.77m, but that was down from US$10.93m in 2023. Financial is the second costliest sector at US$6.08m this year, with industrial third at an average cost of US$5.56m.

Nearly half of all breaches (46%) involved customer personal identifiable information (PII), which can include tax identification (ID) numbers, emails, phone numbers, and home addresses. Intellectual property (IP) records came in a close second (43% of breaches). The cost of IP records jumped considerably from last year, to US$173 per record in this year’s study from US$156 per record in last year’s report.

The costs from lost business and post-breach response rose nearly 11% over the previous year, which contributed to the significant rise in overall breach costs. Lost business costs include revenue loss due to system downtime, and the cost of lost customers and reputation damage. Post-breach costs can include the expense of setting up call centres and credit monitoring services for impacted customers, and paying regulatory fines.

Worryingly, 45% of all breaches were caused by IT failures or human error. The breakdown is 23% are due to IT failure and 22% are due to human error.

Interestingly, security teams and their tools detected breaches 42% of the time. Benign third parties detected the breach 34% of the time, and attackers themselves identified the breach 24% of the time. Security teams are getting better at discovering breaches because the 2023 figure for identification was 33% of the time. When a breach was disclosed by an attacker, the average cost was US$5.53m. However, when a security team identified a breach, the average cost was US$4.55m.

Even so, no-one can be complacent. It’s still taking a long time to detect a breach. It’s still costing companies a lot of money. More needs to be done to protect individuals’ data, don’t you think?

It’s a really useful report by the Ponemon Institute for IBM.

There will be more details from the report next time.

Sunday, 4 August 2024

Perhaps not the best way to deal with a data leak

I’ve written and spoken about security many times, but usually I have been suggesting to people what they might consider doing or not doing in order to keep their data safe. Even if everyone took my advice, I would still be worried about whether they were completely secure, because it’s a continual arms race between the hackers and the large organizations that use mainframes to maintain their security and keep their data safe. New software updates are installed that might contain previously unknown backdoors. Patches to lock those backdoors aren’t always installed quickly enough, so bad actors can use them. Staff members still click on attachments to emails that trigger malware, or they click on links and receive unexpected drive-by malware on their laptops. And there are numerous other ways that the bad actors can get onto your mainframe including, probably, new ones that most of us haven’t heard of yet!

But once you have been hacked, once the bad actors have accessed your computers, exfiltrated your data, encrypted your copy of the data, and left a ransom demand, what should you do? Let’s take a look at how one company dealt with a massive loss of data. It’s been in the news, so I don’t feel I need to keep its name secret, it’s NTT Data Romania.

NTT – Nippon Telegraph and Telephone – was established as a state monopoly in 1952 to take over the Japanese telecommunications system that was being operated by AT&T. NTT was privatized in 1985 to encourage competition in the country's telecom market.

NTT Data is a Japanese multinational information technology service and consulting company that originated in 1988. It is a partly-owned subsidiary of NTT. It acquired Keane Inc in 2010, Dell Services in 2016, and other international companies. NTT Data mainly services non-NTT Group companies. NTT Data Romania was formed in 2000.

That’s a little bit of the company’s history. So, why am I discussing it as something we could all learn from in terms of a cyberattack?

RansomHub, the ransomware group, claimed that they had exfiltrated (stolen) 230GB of sensitive data from the company during an attack that was first detected on 14 June. The bad actors set a ransom deadline of 5 July or else they would publish the data they had stolen.

So, what would your company do if it happened to you? Would you alert your chief financial officer to get ready to pay out a huge amount of money in compensation and fines? Or would you decide to keep quiet about everything? NTT Data Romania officially denied that a ransomware attack took place. They said in a statement to Romania Journal, “No ransomware attack. While there has certainly been some suspicious activity detected relating to a legacy server, the quick response taken by our security team prevented any further damage.

“On 14th June, suspicious activity was detected by our security monitoring team on a legacy server, separate from our corporate network. We immediately activated our Incident Response protocols and rendered the entire environment completely inaccessible and inactive.

“Additional measures to mitigate any further risk and protect the data of our customers were also activated. At this time, there is no visibility that client data has been affected.

“We are conducting an in-depth investigation into the situation and take the security of our client data very seriously.”

Who, within an organization, do you think would decide to keep quiet about a ransomware attack? In this case, three internal messages were sent by the CEO, Maria Metz, on 17, 18, and 24 June. Apparently, the first message confirmed the penetration and compromise of several platforms and services, and asked employees not to come to the company's offices, because they wouldn’t be able to access the data networks. Employees were also asked not to tell anyone outside the company about this crisis, including customers, suppliers, partners, the press, or other people.

You might call me cynical, but I don’t think that plan is going to work, do you? People naturally talk – especially when everyone asks them why they’ve not gone into the office.

With what you’ve seen already, you’ll not be surprised that the company denied the severity of the situation. In response to that, the hackers posted samples of the data, which apparently includes accounting, financial planning, and internal documents of every type and purpose. There’s also personal and recruitment data, project and business data, backup files, client and financial data, as well as legal documents.

You might be thinking, “poor old NTT Data”, but NTT companies seem to be having a bad time recently. NTT West’s president Masaaki Moribayashi resigned in March, following the leak of data relating to 9.28 million customers, which became known in October last year. And now NTT Data Romania in June this year.

I guess no-one wants to publicize their failings, and organizations are the same. However, there comes a time when owning up, taking steps to remediate the problem, and appeasing the customers whose data has been stolen looks a better approach than trying to deny anything happened and asking staff to keep silent. I’m sure any stranger standing in the middle of a local supermarket or bar could have gathered the whole story quickly enough by listening to what people were chatting about.

The other thing is that if your organization is hacked and you fix the problem, and then tell every similar organization how they could be hacked and what they need to do to prevent the same problem occurring to them, you now seem like one of the good guys, don’t you think?

The NTT West hack was, it’s claimed, an inside job. If NTT Data Romania’s was also an inside job, it should make senior staff wonder about the culture within their organization, and the quality and dedication of the staff working for it – including in senior management. Customers of NTT Data Romania must be waiting for their information to start turning up on the dark web, and are probably discussing with their lawyers what sort of compensation they should be demanding from the company. And at the back of their minds, they must be wondering: if NTT Data Romania is keeping quiet about something big like a data loss on this scale, what else is it not telling them?

Sunday, 28 July 2024

Ransomware – some recent thoughts

Cybersecurity technology and information security company, Cisco Talos, recently published some interesting information on the tactics, techniques, and procedures (TTPs) used by the top 14 ransomware groups. Let’s see what we can learn from it.

Firstly, they looked at the steps in a ransomware attack, which won’t come as a surprise. The steps were:

  • Gain access to the targeted entity. Different techniques can be used, but the most common is social engineering, which usually involves sending targeted people emails containing malicious files or links that will run malware on their systems. The malware allows the attacker to deploy more tools and malware to reach their goals, even bypassing multifactor authentication.
  • Scan Internet-facing systems for vulnerabilities or misconfigurations. Unpatched or legacy software is a particularly high risk.
  • Gain persistence. If the malware is identified and removed early on, the attack has failed, so steps need to be taken to ensure persistence. With an attack on Windows, registry keys can be modified, or the malware can be auto-started at boot time (a small defensive sketch for auditing this follows the list). Local, domain, and/or cloud accounts can be created. On a mainframe, multiple copies of the malware might be stored, allowing a second copy to be activated if needed.
  • Network scanning to understand the infrastructure. This is where valuable data is identified. In addition, privilege levels need to be raised to administrator level. On a mainframe, the order of these two sub-steps would probably be reversed.
  • Data exfiltration. The valuable data, usually personally identifiable information, eg names, addresses, social security numbers, bank account details, etc, is then stolen. That might be the end of the attack.
  • Data encryption. Encrypting the data allows the bad actors to send a ransom demand to the organization that has been attacked. Unless the ransom is paid, the target organization won’t get the key to decrypt its data.
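
To make the persistence step a little more concrete, here’s a minimal defensive sketch in Python that lists the classic Windows ‘Run’ registry keys – among the commonest autorun locations abused by malware. It’s illustrative only: it uses the standard winreg module, runs only on Windows, and real endpoint tools check many more locations than these two.

    import winreg  # Windows-only standard library module

    # The two classic autorun locations commonly abused for persistence
    RUN_KEYS = [
        (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
        (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    ]

    def list_autoruns():
        """Return (key path, value name, command) for everything set to run at logon."""
        entries = []
        for hive, path in RUN_KEYS:
            try:
                with winreg.OpenKey(hive, path) as key:
                    index = 0
                    while True:
                        try:
                            name, command, _ = winreg.EnumValue(key, index)
                            entries.append((path, name, command))
                            index += 1
                        except OSError:  # no more values under this key
                            break
            except FileNotFoundError:
                continue  # key absent on this machine
        return entries

    if __name__ == "__main__":
        for path, name, command in list_autoruns():
            print(f"{path}: {name} -> {command}")

Anything in that output that the security team can’t account for deserves a closer look.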

I would suggest that the attackers might also look for links to other organizations. These supply-chain attacks allow the bad actors to use one attack to get into the systems of multiple organizations.

Cisco Talos does offer some suggestions for mitigating the threat of ransomware. These are:

  • Apply patches and updates to systems and software to reduce the risk of exploits being used to access a system.
  • Implement complex and unique password policies and multifactor authentication.
  • Harden the attack surface by disabling unnecessary services and features and limiting the number of public-facing Internet services as much as possible.
  • Segment networks using virtual local area networks (VLANs) or similar technologies. Isolating sensitive data and systems from other networks prevents lateral movement by an attacker.
  • Monitor endpoints using a security information and event management (SIEM) system, and use endpoint detection and response (EDR) or extended detection and response (XDR) tools. (A toy example of the kind of log monitoring a SIEM automates follows this list.)
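
As an aside, the kind of monitoring a SIEM automates can be illustrated with a toy example. This Python sketch counts repeated failed SSH logins per source address in an OpenSSH-style auth log – a crude brute-force signal. The log path and message format are assumptions (they vary by system), and a real SIEM correlates far more than this.

    import re
    from collections import Counter

    # Matches OpenSSH-style failure lines, e.g.
    # "Failed password for invalid user admin from 203.0.113.9 port 52311 ssh2"
    FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

    def suspicious_sources(log_path, threshold=10):
        """Return source addresses with at least `threshold` failed logins."""
        counts = Counter()
        with open(log_path, encoding="utf-8", errors="replace") as log:
            for line in log:
                match = FAILED.search(line)
                if match:
                    counts[match.group(2)] += 1
        return {addr: n for addr, n in counts.items() if n >= threshold}

    if __name__ == "__main__":
        for addr, n in suspicious_sources("/var/log/auth.log").items():
            print(f"{addr}: {n} failed logins")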

One of the big problems facing the IT security team is the number of people working from home. Indusface, an application security SaaS company, has suggested nine ways to protect company data for people working remotely. Here are their suggestions:

  1. Provide company devices. This allows organizations to fully manage and secure the devices used to access company data. The devices should be kept up to date, with encryption and SSL certificates in place. If that’s not possible, home-workers should be given everything they need to secure their own devices, eg anti-malware software.
  2. Scan and penetration test applications. Pen testing protects against data breaches by simulating real-world attacks on systems and highlighting vulnerabilities including privilege escalation attacks. Where vulnerabilities are identified, appropriate defensive measures can be taken.
  3. Utilize virtual private networks (VPNs) across the business. VPNs are easy to implement and protect data that could otherwise be vulnerable to attacks over an open public network.
  4. Deploy a web application firewall (WAF). This will protect web applications from attacks. An AI/ML-based WAF should detect anomalies and block illegitimate requests even if they are made using compromised employee credentials.
  5. Employ encryption software. Encrypting sensitive files means that even if someone were able to steal the files, they would not be able to read the data or content (see the sketch after this list). Security policies should ensure that all remote workers know how to encrypt files and when it is necessary. Routine checks should ensure the policy is being followed.
  6. Strict password management. Hackers rely on weak passwords when brute-forcing point of sale (PoS) terminals. Use automatic password generators to create safe and secure passwords, and ensure that passwords are unique and never duplicated across multiple accounts. For sensitive data, employees should always use multi-factor authentication (MFA), requiring users to provide multiple methods of verifying their identity.
  7. Rigorous access controls. Organizations should apply the principle of least privilege when it comes to access control, ie allowing users access to only the specific assets that they require for their work. Access to files should be revoked as soon as it is no longer necessary, such as when an employee leaves, or a person’s involvement in a project is over.
  8. Provide employees with what they need. To make their jobs easier, remote workers may implement tools, systems, or habits that are not sanctioned by the company. This shadow IT could include using risky apps and tools, sending files through unsecure channels, or storing assets somewhere unprotected. Provide remote workers with all the tools they may need to do their job effectively and ensure that they are aware of all the approved platforms that they have access to.
  9. Fully prepare and train remote workers. Organizations can implement security strategies, but efforts will be futile unless remote workers fully understand what the procedures are and why they are important. Training staff regularly and testing the effectiveness of the training (eg phishing email simulations) is important.
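
On point 5, file encryption needn’t be exotic. Here’s a minimal Python sketch using the third-party cryptography package’s Fernet recipe (authenticated symmetric encryption). The file names are made up for illustration, and in practice the key would be stored in a key vault rather than generated inline.

    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    def encrypt_file(path, key):
        """Encrypt a file, writing the result alongside it as path + '.enc'."""
        with open(path, "rb") as f:
            data = f.read()
        with open(path + ".enc", "wb") as f:
            f.write(Fernet(key).encrypt(data))

    def decrypt_file(enc_path, key):
        """Return the decrypted contents of an encrypted file."""
        with open(enc_path, "rb") as f:
            return Fernet(key).decrypt(f.read())

    if __name__ == "__main__":
        key = Fernet.generate_key()  # in reality, store and retrieve this securely
        encrypt_file("report.txt", key)
        print(decrypt_file("report.txt.enc", key).decode())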

There are some useful hints and tips there. Although they are mainly PC-based ideas, accessing the Windows infrastructure may be just a short step away from accessing an organization’s mainframe.

 

Sunday, 14 July 2024

Interesting browser updates

I was checking on Statcounter to see how popular different browsers were. I wasn’t surprised to see that Google’s Chrome was the most popular, with nearly two-thirds (65.68%) of the market share. Safari came second with 17.96%, which probably gives an indication of the percentage of Macs, iPhones, and iPads in use out there. In third place is Edge. Everyone who has bought a PC will have Edge as the default browser. To be honest, the first thing I do when I get a new laptop is download a different browser – and, judging by the figures, so do lots of other people. Firefox is fourth with 2.75%. I always used to use Firefox, and I liked using it. I just didn’t install it on my newest laptops. C’est la vie! I was surprised to see Samsung Internet in fifth place. I’d never considered using it, and I have a Samsung phone. It scored 2.58% of market share. Sixth was Opera with 2.26%.

Looking at figures for just North America, it came as no surprise to see Apple’s browser had nearly a third of the market share at 31.74%. Chrome had over half at 52.55%. In Europe, the figures were still in the same order, but Chrome had 61.89% of the market and Safari had 18.55%.

Still, whatever browser you choose, it’s still just a browser – and you only use it to access your webmail, or get to Amazon to do your shopping, or check your bank balance, book a holiday, or go to a million other websites, don’t you?

Once you’ve personalized your browser, and got it to remember the user-id and password you use for the websites you visit frequently, and, especially, the ones you only visit once a year, you don’t really want to change it. After all, what extra could a different browser do?

I’ve just started using Opera, or Opera GX as it calls itself. Opera, the browser, has been around for 25 years and is available on laptops and mobile phones, and it has recently had some updates to its built-in artificial intelligence (AI), called Aria, which add some interesting new features.

Firstly, it has the ability to turn text prompts and descriptions into unique images using Google’s Imagen 2 image generation model. Aria identifies the user’s intention to generate an image based on conversational prompts. Users can also use the ‘regenerate’ option to have Aria come up with a new image. Aria allows each user to generate 30 images per day.

Secondly, Aria can now read answers out loud using Google’s WaveNet model. This benefits those who normally use screen readers, like to multitask, or need to hear information instead of reading it. To get this to work (I was using the command line), I had to click on the speaker icon in the bottom-right corner to have Aria read the text response. It was easy to pause the speech by clicking the pause button that replaced the speaker icon. Clicking the speaker icon again restarted the dialogue.

Thirdly, it’s gaining contextual image understanding. They say that Internet users find themselves searching for information about something they saw just as often as for something they read or heard about. So, Aria is also gaining image understanding capabilities. This means that users can now upload an image to Aria. As part of the chat conversation, users can then ask the AI tool about it. For example, if the image is an unknown headset, Aria will identify its brand and model as well as provide some context about it. Or a user can take a picture of a maths problem and ask Aria how to solve it.

To get this to work I had to download the developer version of the browser and create an account, and sign in. Once I’d done that, I clicked on the ‘+’ button on the right of the chat input box, and then selected the ‘upload image’ option. The explanation of the context of the image was quite good.

As part of the update, the text-based chat experience with Aria has also been improved with the addition of two new functionalities: ‘Chat Summary’ and ‘Links to Sources’. The former provides users with a concise summary of an entire conversation with Aria, allowing them to recap the most important information. In the latter feature, Aria supplies the user with links to sources about the topic of the conversation, enabling them to get more context regarding their enquiry. In addition, the Aria command line in the browser can now be easily activated by pressing the ‘ctrl + /’ or ‘cmd + /’ button combination. This enables the user to open the additional floating window instead of using Aria from the extension page. There’s also a small icon on the left-hand side of the browser that opens up Aria.

Features that were already part of Opera GX that you might be interested in include: RAM, CPU, and network limiters; a built-in free VPN (virtual private network); Twitch and Discord integration (chat facilities used by gamers); and a built-in ad blocker.

I’m quite enjoying using the browser. You might want to give it a try.

 

Sunday, 30 June 2024

Mainframe security – there really is a war going on

In the mainframe world, everyone has been talking about security for a very long time. In fact, I’ve seen some people yawn as the topic of security comes up again – “been there, done that, got the T-shirt” they say. But it’s not that easy. Just because all the security you had in place last year seems to have worked, doesn’t mean that it is secure enough for this year. There is a veritable arms race going on and no-one can afford to be complacent.

When I say no-one, I mean no-one in an organization can be complacent, perhaps least of all the chief financial officer (CFO). It’s the CFO’s job to safeguard their organization’s reputation and to save their company money. That was the job of the CFO at the USA’s second biggest health insurer, Anthem, which was hacked in December 2014. Nearly ten years later, the substantial cost to the company is only finally becoming clear.

That cyberattack saw 79 million individuals’ personal information compromised. Firstly, Anthem agreed to pay $115 million to those people whose information was potentially stolen. The plaintiffs’ case was that Anthem should pay their costs of checking whether the exfiltrated data was being used nefariously by anyone else. Then in 2020, Anthem agreed to pay $16 million to the US Department of Health and Human Services, Office for Civil Rights (OCR) and take substantial corrective action to settle potential violations of the Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rules. Also in 2020, the company paid $39.5 million as part of a settlement with US state attorneys general from 44 states and Washington, DC. On top of that, there may well have been payments by Anthem for the ransom, and for technical experts to try to resolve the attack. All in all, a hefty payout for any organization.

And that wasn’t a one-off attack. According to the Cost of a Data Breach Report from IBM Security, the average cost of a data breach is US$4.45 million. For companies, like Anthem, in the healthcare sector, the average cost of a data breach was US$10.93 million.

In the UK just recently, hospitals and GP practices found that Russian hackers had infiltrated and rendered unusable the IT systems of Synnovis, a company that analyses blood tests, which led to hospitals having to cancel operations, among other disruptions. From personal experience, I know of a small web design and hosting company that says its websites are under constant attack. And I know of local secondary schools that have been attacked.

Everywhere and everyone that has any kind of tech is currently under attack. And they need to do their bit in the arms race that’s taking place between us – I’m assuming the good guys are reading this – and the people who are trying to hack your system.

Oxford Capital recently sent out a press release reminding us that the World Economic Forum has shown that ransomware attacks have increased by nearly 300%, with over 50% of these attacks specifically targeting small businesses. Oxford Capital then highlighted the top AI security threats organizations need to be prepared to combat. They were:

  • AI-powered phishing attacks using AI to create highly-convincing and personalized emails. These attacks are designed to deceive employees into revealing sensitive information or downloading malicious software.
  • Automated vulnerability exploits. Hackers are using AI to scan for and exploit vulnerabilities in software systems at an unprecedented speed and scale. That’s why installing patches is such a priority.
  • Deep fake scams are where cybercriminals use AI to create realistic audio and video impersonations of company executives. These deepfakes can be used to manipulate employees into transferring funds or sharing confidential information.
  • AI-driven ransomware allows attackers to efficiently target, copy, and encrypt critical business data. 
  • Malicious AI bots can be used to conduct malicious activities such as credential stuffing, where bots attempt to gain access to accounts using stolen credentials. 
  • Weak passwords are a major cybersecurity threat because they can be easily guessed or cracked, allowing unauthorized access to sensitive information. (A short password-generator sketch follows this list.)
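
Since weak passwords crop up repeatedly, here’s a trivial sketch of the automatic password generator idea, using only Python’s standard library secrets module, which is designed for cryptographic use (unlike random).

    import secrets
    import string

    # Pool of characters to draw from
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length=20):
        """Generate a cryptographically strong random password."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    if __name__ == "__main__":
        print(generate_password())

Pair that with a password manager so no-one has to memorize the result.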

The suggested solutions given by Oxford Capital include:

  • Strong password policies. If you don’t already do this, use complex passwords and update them regularly.
  • Multi-factor authentication (MFA) requires a user to present two (or more) items or factors to an authentication mechanism before they are given access. (A small TOTP sketch follows this list.)
  • Regularly update software to ensure that the latest security patches are installed and no easy-access back doors (vulnerabilities) are anywhere on your system.
  • Employee training. I’ve been part of this kind of exercise, where you give everyone in your organization training to recognize phishing attacks and other cyber threats, and then later test random attendees. Even so, you still find staff clicking on your dodgy test email. Therefore, I would suggest that training should be ongoing.
  • Use robust cybersecurity measures. They recommend users invest in comprehensive security solutions to detect and respond to threats efficiently. I would suggest mainframe-related products like File Integrity Monitoring (FIM) from MainTegrity to provide not only protection, but also early warning if some kind of attack is taking place, as well as automation to suspend jobs and users until you’re sure they really are allowed to do what they seem to be doing to your mainframe.
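
To give a flavour of how MFA works under the hood, here’s a small sketch of time-based one-time passwords (TOTP) using the third-party pyotp library. The user name and issuer are placeholders; in production the secret would be generated once per user and stored server-side.

    import pyotp  # third-party: pip install pyotp

    # Provisioning: generate a per-user secret once, store it server-side,
    # and let the user load the same secret into an authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print("Provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                     issuer_name="ExampleCorp"))

    # Verification: the user types in the 6-digit code from their app.
    code = totp.now()  # generated here only to demonstrate the round trip
    print("Code accepted:", totp.verify(code))  # True within the time window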

The list might have added using air-gapped hardware to protect backups from being overwritten, as well as routinely protecting data in transit from being stolen.

What I’m suggesting is that everyone needs to take steps to protect whatever data they have on their computing platforms, including the cloud, and people with the most to lose, like mainframers, need to absolutely keep one step ahead in the data security arms race. And the CFO, and other top execs, need to make sure the IT team have everything they need in order to do that. After all, it’s those top execs who will be paying for it if mainframe security isn’t as good as it needs to be.