Sunday, 23 February 2025

Mainframes, AI, and security

I was interested to read IBM’s thoughts about AI on the mainframe, published in January. You can read the article here. It discusses the different ways that AI can be integrated with mainframes, and tells us that on-chip AI accelerators can scale to process millions of inference requests per second at very low latency. This capability allows organizations to take advantage of data and transactional gravity by strategically co-locating large datasets, AI, and critical business applications. In the future, next-gen accelerators will open up new opportunities to expand AI capabilities and use cases as an organization’s needs grow.

It talks about ensemble AI, which it describes as a hybrid concept that integrates different AI technologies, such as traditional AI and LLM encoder models, to deliver faster, more accurate results than any single model can accomplish alone, tapping into the mainframe's massive processing power and data storage capabilities.

The article then discusses four potential use cases of AI on a mainframe. The first of these is real-time fraud detection, which can be of use to fintech companies. As an example, it discusses a large North American bank that had developed an AI-powered credit-scoring model and deployed it on an on-premises cloud platform to help fight fraud. However, only 20% of credit card transactions could be scored in real time. The bank decided to move its complex fraud-detection tools to its mainframe.

After the mainframe implementation, the bank began scoring 100% of credit card transactions in real time, at 15,000 transactions per second, significantly improving fraud detection.

Moreover, each transaction used to take 80 milliseconds to score. With the reduced latency provided by the mainframe, responses now take 2 milliseconds or less. The move to the mainframe has also saved the bank over US$20 million in annual fraud prevention spend without impacting service-level agreements.
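
To make the idea of in-transaction scoring concrete, here is a minimal sketch in which a scoring function runs inline in the payment path and blocks high-risk transactions. The scoring rule, feature names, and 0.9 threshold are hypothetical stand-ins for a real trained model deployed next to the data.

```python
# Minimal sketch of in-path transaction scoring; the scoring rule,
# feature names, and 0.9 threshold are hypothetical stand-ins for a real model.
def score(txn):
    """Return a fraud risk score between 0 and 1 (toy heuristic, not a trained model)."""
    risk = 0.0
    if txn["amount"] > 5000:
        risk += 0.5
    if txn["country"] != txn["home_country"]:
        risk += 0.3
    if txn["merchant_category"] == "gambling":
        risk += 0.2
    return min(risk, 1.0)

def process(txn, threshold=0.9):
    """Score the transaction inline and block it if the risk exceeds the threshold."""
    risk = score(txn)
    return ("BLOCK" if risk >= threshold else "APPROVE", risk)

txn = {"amount": 7200, "country": "FR", "home_country": "GB",
       "merchant_category": "gambling"}
print(process(txn))   # ('BLOCK', 1.0)
```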

The second example is IT operations and AIOps. The article describes how organizations can now use AI to proactively prevent or even predict an outage caused by equipment failure. By applying AI mechanisms, organizations can detect anomalies at the transaction, application, subsystem, and system levels. For instance, sensors can analyse data from mainframe components to predict potential hardware failures and enable preventative maintenance. Organizations are increasingly applying AI capabilities to automate, streamline, and optimize IT infrastructure and operational workflows. AIOps enables IT operations teams to respond quickly to slowdowns and outages, providing better visibility and context.
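
As a rough illustration of that kind of anomaly detection, the sketch below flags metric readings that drift well away from a learned baseline. The metric names, sample values, and 3-sigma threshold are hypothetical; real AIOps tooling is far more sophisticated.

```python
# Minimal sketch of baseline anomaly detection on operational metrics.
# Metric names, sample data, and the 3-sigma threshold are all hypothetical.
from statistics import mean, stdev

def find_anomalies(history, latest, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the historical mean."""
    anomalies = {}
    for metric, values in history.items():
        mu, sigma = mean(values), stdev(values)
        reading = latest[metric]
        if sigma > 0 and abs(reading - mu) / sigma > threshold:
            anomalies[metric] = reading
    return anomalies

history = {
    "cpu_busy_pct":    [62, 65, 61, 64, 63, 66, 62, 64],
    "io_rate_per_sec": [800, 820, 790, 810, 805, 815, 795, 808],
}
latest = {"cpu_busy_pct": 97, "io_rate_per_sec": 812}

print(find_anomalies(history, latest))   # {'cpu_busy_pct': 97}
```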

The third example is advanced document processing. The article says that processing documents on the mainframe helps streamline and deliver accurate data extraction in a highly secure setting. Organizations can use gen AI to summarize financial documents and business reports, extract key data points (for example, financial metrics and performance indicators), and identify essential information for compliance processes (for example, financial audits).
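
As a simplified stand-in for the data-extraction step (a real pipeline would use gen AI rather than regular expressions), the sketch below pulls a few financial figures out of a report. The field names, patterns, and sample text are hypothetical.

```python
# Simplified stand-in for document data extraction; the patterns and
# sample text are hypothetical, and a real pipeline would use gen AI.
import re

report = """
Q3 revenue was $4.2 million, up 7% year on year.
Operating margin: 12.5%. Audit reference: FIN-2024-118.
"""

patterns = {
    "revenue":          r"revenue was \$([\d.]+ (?:million|billion))",
    "operating_margin": r"Operating margin:\s*([\d.]+%)",
    "audit_reference":  r"Audit reference:\s*([A-Z]+-\d{4}-\d+)",
}

extracted = {}
for field, pattern in patterns.items():
    match = re.search(pattern, report)
    extracted[field] = match.group(1) if match else None

print(extracted)
# {'revenue': '4.2 million', 'operating_margin': '12.5%', 'audit_reference': 'FIN-2024-118'}
```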

Last on the list are AI code assistants. The article affirms that virtual assistants on the mainframe are helping to bridge the developer skill gap. Tools such as IBM® watsonx Code Assistant™ for Z use generative AI to analyse, understand, and modernize existing COBOL applications. This capability allows developers to translate COBOL code into languages like Java. It also accelerates application modernization while preserving legacy COBOL systems’ functionality. Watsonx Code Assistant for Z features include code explanation, automated refactoring, and code optimization advice, making it easier for developers to maintain and update old COBOL applications.

Now, I’m not saying anything against those four areas. In fact, I totally support them as great uses of AI on a mainframe. However, I would have thought that one area where AI assistance would be needed is security. It only takes a brief Google search to find a number of companies that have produced reports about ransomware attacks, or that give more details about the techniques criminal gangs, or teams associated with foreign governments, are using to attack organizations. There are also plenty of reports about the cost of these attacks to high-profile organizations. I don’t just mean the cost of new hardware, software, or staff; I mean fines for non-compliance with regulations, and court costs and compensation paid to individuals whose data has been stolen.

I would have thought those kinds of stories would have crossed the desk of an organization’s chief financial officer (CFO) as well as anyone associated with IT. Admittedly, the majority of attacks are on non-mainframe platforms, but that doesn’t mean mainframes aren’t targets for attacks because, as we know, they contain a large amount of data about people and finances.

I would like to see AI-based software that is as effective as the best non-AI security software, and then I would like to see that AI software learn and improve. As I’ve mentioned previously, the security software needs to be trained to recognize ‘normal’ activity by people who have access to the mainframe, and then automatically suspend any unusual actions by them. This prevents too much damage being done if it is a job being run by malware rather than a real person. If the person is authorized, then appropriate checks by the security team can allow the job to continue.
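
A minimal sketch of that idea might look like the following: build a per-user baseline of ‘normal’ actions, then hold anything outside it for review rather than letting it run. The user names, action strings, and suspend mechanism are all hypothetical simplifications.

```python
# Minimal sketch: per-user baseline of 'normal' actions; anything outside
# the baseline is suspended pending review. All names here are hypothetical.
NORMAL_ACTIONS = {
    "payroll_batch": {"READ PAYROLL.MASTER", "SUBMIT PAYRUN.JOB"},
    "dba_team":      {"REORG DB2.TABLESPACE", "RUNSTATS DB2.TABLESPACE"},
}

def check_action(user, action):
    """Allow actions inside the learned baseline; suspend anything unusual for review."""
    baseline = NORMAL_ACTIONS.get(user, set())
    if action in baseline:
        return "ALLOW"
    # Unusual action: hold the job and alert the security team instead of running it.
    return "SUSPEND_FOR_REVIEW"

print(check_action("payroll_batch", "SUBMIT PAYRUN.JOB"))      # ALLOW
print(check_action("payroll_batch", "DELETE PAYROLL.MASTER"))  # SUSPEND_FOR_REVIEW
```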

Because malware attacks get more sophisticated each year, it’s important to have some kind of defence shield that can learn, adapt, and continue to keep the mainframe safe. I’m surprised that we haven’t had security software listed as an important area for AI development. I assume it must be because it’s not as easy to do as some of the other areas listed in the IBM article.

Sunday, 16 February 2025

Trevor Eddolls – IBM Champion 2025

The iTech-Ed Group is pleased to announce that Trevor Eddolls, its Chair, has been recognized by IBM as an IBM Champion for 2025. Trevor was first awarded IBM Champion status in 2009.


IBM said: “On behalf of IBM, it is my great pleasure to recognize you as a returning IBM Champion in 2025. Congratulations!

“We would like to thank you for your continued leadership and contributions to the IBM technology community. This recognition is awarded based on your renewal and contributions for the 2024 calendar year. The IBM Champion designation is a 1-year term, and may be renewed by IBM annually, provided you demonstrate continued community engagement and contributions. You may also have earned 2024 IBM Rising Champion advocacy badges (IBM Contributor, Advocate, and Influencer) on your way to this honour. Your IBM Champion status renews now and will run through December 2025.”

Trevor Eddolls, Chair of iTech-Ed Ltd said: “I think it's really important in these days of multiple computing platforms being available that people share information with others about the positive contributions mainframes make to the world of IT. And I'm proud that my efforts have been recognized again this year by IBM. I think the Champion programme is a very positive way for IBM to recognize people around the world who help to promote its products and share their skills in using them.”

According to IBM: “The IBM Champion program recognizes these innovative thought leaders in the technical community and rewards these contributions by amplifying their voice and increasing their sphere of influence. IBM Champions are enthusiasts and advocates: IT professionals, business leaders, developers, executives, educators, and influencers who support and mentor others to help them get the most out of IBM software, solutions, and services.”

So why is iTech-Ed Group’s Trevor Eddolls an IBM Champion? Well, he doesn’t work for IBM, but he does write about mainframe hardware and software. You can read his articles here. He has also written articles for the TechChannel website, and often blogs on the Planet Mainframe website. Trevor has spoken at the GSE UK regional conference for the past few years; in 2024, he spoke about how to create artificial general intelligence (AGI). He has been Editorial Director for the well-respected Arcati Mainframe Yearbook (renamed the Arcati Mainframe Navigator in 2025). Until recently, Trevor Eddolls was the chair of the Virtual IMS, Virtual CICS, and Virtual Db2 user groups; their new website can be found at virtualusergroups.com. This work has earned Trevor Eddolls the IBM Champion accolade for the past seventeen years.

Are IBM Champions compensated for their role? No. Do IBM Champions have any obligations to IBM? Again, the answer is no. The title recognizes only their past contributions to the community over the previous 12 months. Do IBM Champions have any formal relationship with IBM? No. IBM Champions don’t formally represent IBM, nor do they speak on behalf of IBM.

But it’s not all one-sided! There are regular IBM Champions calls, where IBM and Champions share relevant information on a range of topics. IBM Champions also receive merchandise customized with the IBM Champion logo. And IBM Champions receive visibility, recognition, and networking opportunities at IBM events and conferences, as well as special access to product development teams and invitations and discounts to events and conferences.

You can find more information about Trevor and his work on X (Twitter), Facebook, Instagram, and LinkedIn.

You can read Trevor's IBM Champion profile here.

You can find out more about iTech-Ed here.

 

Sunday, 2 February 2025

Mainframe staff, security, and AI

If you want to test a new application, the best data to test it on is live data! Now, I’m sure that there are procedures in place to prevent that, and I’m sure anonymized data would be used instead. But it became apparent a few years ago that some members of staff were copying live data off the mainframe and using it for testing in cloud applications. Again, hopefully this doesn’t happen anymore. However, there is apparently a new problem facing mainframe security teams: the use of live data with artificial intelligence (AI) applications.

It was the rapid increase in people working from home during the pandemic that led to a rise in shadow IT – people using applications to get work done, even though those applications hadn’t been tested by the IT security team. A recent survey has found that AI is now giving rise to another massive security issue. This becomes even more of a concern with the current popularity of DeepSeek V3 and the announcement of Alibaba’s Qwen 2.5, both AIs originating from China.

Cybsafe’s The Annual Cybersecurity Attitudes and Behaviors Report 2024-2025 found that, worryingly, almost 2 in 5 (38%) professionals have admitted to sharing personal data with AI platforms without their employer’s permission. So, what data is being shared most often, and what are the implications? That’s what application security SaaS company Indusface looked into. Here’s what they found.

One of the most common categories of information shared with AI is work-related files and documents. Over 80% of professionals in Fortune 500 enterprises use AI tools, such as ChatGPT, to assist with tasks such as analysing numbers and refining emails, reports, and presentations [2].

However, 11% of the data employees paste into ChatGPT is strictly confidential, for example internal business strategies, and the employees don’t fully understand how the platform processes this data. Staff should remove sensitive data before entering prompts into AI tools [3].
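
As a rough sketch of what ‘remove sensitive data’ could mean in practice, the snippet below masks a few obvious patterns (email addresses, card-like numbers, UK-style phone numbers) before text is sent anywhere. The patterns are illustrative only and nowhere near a complete PII filter.

```python
# Illustrative redaction pass before text is pasted into an AI tool.
# The regex patterns are simplified examples, not a complete PII filter.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d(?:[ -]?\d){12,15}\b"), "[CARD]"),    # 13-16 digit card-like numbers
    (re.compile(r"\b(?:\+44|0)\d{9,10}\b"), "[PHONE]"),     # UK-style phone numbers
]

def redact(text):
    """Mask obvious personal data before the text leaves the organization."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Email jane.doe@example.com re card 4111 1111 1111 1111 and call 07700900123."
print(redact(prompt))
# Email [EMAIL] re card [CARD] and call [PHONE].
```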

Personal details such as names, addresses, and contact information are often being shared with AI tools daily. Shockingly, 30% of professionals believe that protecting their personal data isn’t worth the effort, which indicates a growing sense of helplessness and lack of training.

Access to cybersecurity training has increased for the first time in four years, with 1 in 3 (33%) participants using it and 11% having access but not utilizing it. For businesses to remain safe from cybersecurity threats, it is important to carry out cybersecurity training for staff, upskilling them on the safe use of AI [1].

Client information, including data that may fall under regulatory or confidentiality requirements, is often being shared with AI by professionals.

For business owners or managers using AI with employee information, it is important to be wary of sharing bank account details, payroll data, addresses, or even performance reviews. Doing so can violate contractual policies and expose the organization to legal action if sensitive employee data is leaked.

Large language models (LLMs) are crucial to many generative AI applications, such as virtual assistants and conversational AI chatbots, and are often accessed through OpenAI’s models, Google Cloud AI, and many more.

However, the data that helps train LLMs is usually sourced by web crawlers scraping and collecting information from websites. This data is often obtained without users’ consent and might contain personally identifiable information (PII).

Other AI systems that deliver tailored customer experiences might collect personal data, too. It is recommended to ensure that the devices used when interacting with LLMs are secure, with full antivirus protection to safeguard information before it is shared, especially when dealing with sensitive business financial information.

AI models are designed to provide insights, not to store passwords securely, so sharing passwords with them could result in unintended exposure, especially if the platform does not have strict privacy and security measures.

Indusface recommends that individuals avoid reusing passwords across multiple sites, because one breach could then compromise multiple accounts. Using strong passwords with a mix of symbols and numbers has never been more important, along with activating two-factor authentication to secure accounts and mitigate the risk of cyberattacks.
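
For what a ‘strong’ password can mean in practice, here is a small sketch that generates a random password mixing letters, digits, and symbols using Python’s secrets module; the 20-character length and symbol set are just illustrative choices.

```python
# Small sketch: generate a strong random password with letters, digits, and symbols.
# The 20-character length and symbol set are illustrative choices.
import secrets
import string

def generate_password(length=20):
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    while True:
        candidate = "".join(secrets.choice(alphabet) for _ in range(length))
        # Require at least one digit and one symbol so the mix is guaranteed.
        if any(c.isdigit() for c in candidate) and any(c in "!@#$%^&*-_" for c in candidate):
            return candidate

print(generate_password())
```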

Developers and employees increasingly turn to AI for coding assistance; however, sharing a company’s codebase can pose a major security risk because it is the business’s core intellectual property. If proprietary source code is pasted into AI platforms, it may be stored, processed, or even used to train future AI models, potentially exposing trade secrets to external entities.

Businesses should, therefore, implement strict AI usage policies to ensure sensitive code remains protected and never shared externally. Additionally, using self-hosted AI models or secure, company-approved AI tools can help mitigate the risks of leaking intellectual property.
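
One simple way to back such a policy up technically (a sketch only, assuming hypothetical internal markers) is to scan outbound text for signs of proprietary code before it can be sent to an external AI service.

```python
# Illustrative pre-send check: block text that looks like proprietary source code.
# The markers listed here are hypothetical examples of internal identifiers.
PROPRIETARY_MARKERS = [
    "Copyright (c) Example Corp",   # internal copyright banner
    "EXAMPLECORP-INTERNAL",         # classification tag in source headers
    "com.examplecorp.",             # internal package namespace
]

def allowed_to_send(text):
    """Return False if the text contains any marker of internal source code."""
    return not any(marker in text for marker in PROPRIETARY_MARKERS)

snippet = "package com.examplecorp.payments; // EXAMPLECORP-INTERNAL"
print(allowed_to_send(snippet))                            # False
print(allowed_to_send("How do I sort a list in Java?"))    # True
```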

 

 

The sources given for their research are:

  1. Cybsafe | The Annual Cybersecurity Attitudes and Behaviors Report 2024-2025
  2. Masterofcode | MOCG Picks: 10 ChatGPT Statistics Every Business Leader Should Know
  3. CyberHaven | 11% of data employees paste into ChatGPT is confidential