Sunday 24 September 2023

What can I use mainframe-based AI for?

It’s a funny old world. On the one hand, you have people talking about AI replacing just about everyone and doing their jobs faster and more accurately. And, on the other hand, you have people talking about how just about everyone over 50 is leaving full-time employment and taking up more fulfilling occupations – or else about just how boring their job is. It seems to me that the initial focus of AI should be on doing the work that no-one really wants to do – don’t you think?

The good news, in that respect, is that IBM’s much-publicized watsonx enterprise AI platform is doing just that. Now available is watsonx Code Assistant for Z. It’s built on a 20-billion-parameter foundation model that was trained on 1.5 trillion tokens of data and is aware of 115 coding languages.

The problem being addressed is that the majority of business applications running on a mainframe were written in COBOL, whereas the majority of programs running on cloud-based platforms are written in some other, more modern language, eg Java. Mainframe sites seduced by the term modernization (who said mainframes aren’t modern?) and wanting to move their business applications into the cloud will want to rewrite those existing COBOL applications. The problem is that very few programmers view rewriting large pieces of COBOL application code in Java as anything more than a poisoned chalice. It’s not something they would view as an interesting or pleasurable way to spend their time. Things are made even worse by the fact that the original source code – along with any original documentation – has probably long ago disappeared.

That’s where Code Assistant for Z comes in. It’s a generative AI product built on the watsonx enterprise AI platform, which can help developers translate mainframe COBOL applications into Java. IBM’s selling point is that the tool offers improved testing, faster rewriting of functionality in Java, and lower costs than manually updating the old COBOL code. Code Assistant also provides automated testing processes, and can be used at each step of the ‘modernization’ process of converting the existing COBOL code to Java.
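To give a flavour of the kind of translation involved, here’s a minimal sketch: a hypothetical COBOL business rule (shown in the comments) and the sort of Java a tool like Code Assistant might produce for it. The names and figures are invented for illustration – this is not output from the actual product.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical COBOL being translated (invented for illustration):
//
//     COMPUTE-DISCOUNT.
//         IF ORDER-TOTAL > 1000
//             MULTIPLY ORDER-TOTAL BY 0.10 GIVING DISCOUNT-AMOUNT
//         ELSE
//             MOVE ZERO TO DISCOUNT-AMOUNT.
//
// One plausible Java rendering of the same business rule. BigDecimal is
// used because COBOL's packed-decimal arithmetic is exact, and float or
// double would quietly change the results.
public class DiscountCalculator {

    private static final BigDecimal THRESHOLD = new BigDecimal("1000");
    private static final BigDecimal RATE = new BigDecimal("0.10");

    public static BigDecimal computeDiscount(BigDecimal orderTotal) {
        if (orderTotal.compareTo(THRESHOLD) > 0) {
            return orderTotal.multiply(RATE).setScale(2, RoundingMode.HALF_UP);
        }
        return BigDecimal.ZERO.setScale(2);
    }

    public static void main(String[] args) {
        System.out.println(computeDiscount(new BigDecimal("2500.00"))); // prints 250.00
    }
}
```

The hard part, of course, isn’t any single paragraph like this one – it’s translating millions of lines consistently while preserving behaviour, which is exactly where the automated testing side of the tool earns its keep.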

In addition to watsonx Code Assistant for Z, IBM previously announced Code Assistant for Red Hat Ansible Lightspeed, and plans to launch new product-focused versions to address other languages and improve time-to-value for modernization. IBM is also saying that the products will address the shortage of skilled developers that currently exists.

I talked about the latest Cost of a Data Breach Report from IBM Security back in August. I thought I’d just highlight what that report said about AI. Before we look at that, just a reminder that the survey found that it takes an organization, on average, 204 days to identify a breach and, once a breach has been identified, a further 73 days to contain it – 277 days in total. The worldwide average cost of a data breach is US$4.45 million.

The report makes a strong case for the use of security AI, saying that organizations making extensive use of security AI and automation identified and contained a data breach 108 days faster than organizations that didn’t use them. That category covers the use of AI, machine learning, automation, and orchestration to augment or replace human intervention in the detection and investigation of threats, as well as in the response and containment process. At the opposite end of the spectrum are processes driven by manual inputs, often across dozens of tools and complex, non-integrated systems, with no data shared between them.

In addition, there were cost savings with AI and automation. The report found data breach costs that were US$1.76 million lower than at organizations that didn’t use security AI and automation capabilities.

There’s a lot of talk about the downside of AI – in fact, I’ve even written about it – but, like all technologies that revolutionize the way people work and live, it has a positive side and a negative side. All organizations need to have an AI policy in place to ensure that employees are not using AI in any way that could harm the company. And there is the possibility that AI could take away people’s jobs. I like to think that, like the introduction of PCs into organizations – which took away the typing pool and many office secretary jobs – it will also create new types of job, and that the overall number of jobs available will actually increase.

Also looking on the positive side, things like Code Assistant are able to do jobs where there is a shortage of people who would otherwise be available to do them (ie developers), and to do work that most programmers would rather not have to do (ie rewrite COBOL programs in Java).

The whole Code Assistant for Z approach by IBM seems a great step forward in the use of safe AI.

Sunday 17 September 2023

I think I know about AI, but what is actual intelligence?

So many people are talking about Artificial Intelligence that I thought it would be useful to see what psychologists think about natural intelligence. It’s a term that we all think we know the meaning of, but what would people working in that field actually identify or recognize as intelligence?

Like all good philosophy essays, let’s start with a definition of intelligence. Intelligence is: “the ability to acquire and apply knowledge and skills”. Or: “the ability to solve complex problems or make decisions with outcomes benefiting the actor”. Or: “the capacity or ability to acquire, apprehend, and apply knowledge in a behavioural context”. So, we’re looking at acquiring information and applying that information.

It’s also been suggested that intelligence gives humans the cognitive abilities to learn, form concepts, understand, and reason, including the capacities to recognize patterns, innovate, plan, solve problems, and employ language to communicate.

In psychology, there have been various theories about what intelligence actually is and attempts to measure it.

Psychometric theories treat intelligence as a composite of abilities measured by mental tests of reasoning ability and memory. Such tests can be given to people, and a numerical result produced.

Spearman found that people who do well on one type of test generally do well on other types too. Using factor analysis, he suggested that two kinds of factor underlie all differences in test scores: a general factor, which he labelled ‘g’, and a specific factor related to the particular type of task.
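For anyone who likes their psychology in symbols, Spearman’s model has a conventional textbook formulation – this is a sketch in standard notation, not a quotation from Spearman:

```latex
% Spearman's two-factor model, in conventional notation: the standardized
% score x_j on test j is a weighted sum of the general factor g and a
% factor s_j specific to that test.
\[
  x_j = a_j \, g + s_j
\]
% The loading a_j measures how strongly test j draws on g; it is the
% shared g that makes scores on different kinds of test correlate.
```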

On the other hand, Thurstone proposed seven ‘primary mental abilities’. They were: verbal comprehension; verbal fluency; number; spatial visualization; inductive reasoning; memory; and perceptual speed.

Vernon and Cattell modelled intellectual abilities as hierarchical, with g (general ability) at the top, and specific abilities below. Cattell also suggested that general ability can be subdivided into ‘fluid’ and ‘crystallized’, where fluid abilities are the reasoning and problem-solving abilities measured by tests, and crystallized abilities include vocabulary, general information, and knowledge about specific fields.

Cognitive psychologists didn’t go along with these ideas; they thought it was important to understand the processes underlying intelligence. They assumed that intelligence comprises mental representations of information (such as propositions or images), plus processes that can operate on those representations.

Hunt, Frost, and Lunneborg suggested that basic cognitive processes are the building blocks of intelligence.

Many of the experiments assumed that humans processed information sequentially or serially. However, they may well process information in chunks and in parallel. And there may be cultural differences.

Cognitive-contextual theories looked at how cognitive processes operate in various settings. Gardner proposed a theory of ‘multiple intelligences’, including linguistic, logical-mathematical, spatial, musical, bodily-kinaesthetic, interpersonal, and intrapersonal intelligence.

Sternberg proposed a ‘triarchic’ theory. He thought that musical and bodily-kinaesthetic abilities were talents rather than intelligences. Sternberg’s three integrated and interdependent aspects of intelligence were: practical (the ability to get along in different contexts), creative (the ability to come up with new ideas), and analytical (the ability to evaluate information and solve problems).

Biological theories of intelligence suggest that understanding intelligence is only possible by identifying its biological basis – in other words, by looking at what neurons are doing.

There have been studies of different areas of the brain. For example, Levy and others found that the left hemisphere is superior in analytical tasks, such as are involved in the use of language, while the right hemisphere is superior in many forms of visual and spatial tasks. Overall, the right hemisphere tends to be more synthetic and holistic in its functioning than the left. Remember that the corpus callosum links the two halves of the brain, so work was done on patients whose corpus callosum had been severed. Levy and Sperry found that the left hemisphere of the brain functioned better with patterns that are readily described in words but are difficult to discriminate visually. Whereas the right hemisphere was more adept with patterns requiring visual discrimination.

Eysenck and others looked at brain waves and speed of response in people taking intelligence tests. Some researchers found a relationship between brain waves and scores on a standard psychometric test of intelligence.

Others have looked at blood flow in the brain, which indicates which areas of the brain are being used. Haier found that people who perform better on conventional intelligence tests often show less activation in relevant portions of the brain than do those who perform less well.

There have been a number of studies of children showing how intelligence develops. The outstanding researcher in this area is Piaget. His four stages of cognitive development were: sensorimotor intelligence; preoperational thinking; concrete operational thinking; and formal operational thinking.

You might ask what impact environment has on intelligence. It does seem that intelligence runs in families, and, according to Plomin, “recent genome-wide association studies have successfully identified inherited genome sequence differences that account for 20% of the 50% heritability of intelligence” – that is, around 10 percentage points of the total variance. However, there is no single gene for intelligence. And we know from epigenetics that, through a process called methylation, genes can be turned on or off. DNA methylation is influenced by diet, exercise, stress, relationships, thoughts, nutritional status, toxins, sleep, infections, etc. So, yes, environment can affect a person’s intelligence.

What do intelligence (IQ) tests measure? Because there is no complete definition of intelligence, it would seem that IQ tests simply measure what IQ tests measure!

It’s worth noting that people also talk about Emotional Intelligence (EI), and there are EQ tests to measure how emotionally intelligent a person is. Emotional Intelligence is usually described as a person’s ability to perceive, use, understand, manage, and handle emotions. The name most associated with EI is Daniel Goleman. Some research has found that people with high EI have greater mental health, job performance, and leadership skills. There are debates about whether EI is really a form of intelligence or something else.

What can AI developers learn from natural human intelligence? That’s difficult to answer. Clearly, people like Thurstone, Cattell, and Gardner (and many others) came up with lists of things that make up human intelligence, and those lists might be helpful for AI developers. However, as in all things, there have been many different approaches to the question of what intelligence is. What might be of interest is an idea from Carol Dweck in her book Mindset: The New Psychology of Success. The book suggests that some people believe their success is based on innate ability; these people are said to have a ‘fixed’ theory of intelligence (ie a fixed mindset). Other people believe their success is based on hard work, learning, training, and doggedness; these are said to have a ‘growth’ or an ‘incremental’ theory of intelligence (a growth mindset). And everyone else sits somewhere on a continuum between the two extremes. Perhaps the one thing to take away is that a successful AI needs to have a growth mindset.


Sunday 10 September 2023

Mainframes and the world of cyber-crime

Mainframe security is a concern for all mainframe-using organizations. However, many people working with mainframes are unaware of just how professional hacker groups are. Some people still have the idea that a hacker is some kind of disaffected teenager who plays at accessing corporate data. The truth is that hacker gangs are using the same tactics, techniques, and procedures (TTPs) as legitimate businesses. Plus, they are able to offer various hacking techniques as-a-Service. This means almost anyone can use them to hack your mainframe.

Cyber-crime allows criminal gangs to make huge profits, which means there has been massive growth both in their activity and in the underground marketplace where products and services can be bought and sold. Most IT people are familiar with the idea of Ransomware-as-a-Service (RaaS), but not all bad actors are looking to steal sensitive data, which they can sell, or to extort money through ransomware attacks. Some bad actors are looking to steal processing power in order to mine cryptocurrency. Some simply gain access to a corporate network and sell that access to others. These are called initial access brokers (IABs). You can also find Crypter-as-a-Service (CaaS) and Malware-as-a-Service (MaaS) available to purchase.

Any IT security team has to be prepared for hackers – or, more likely, hacking gangs – to use encrypted anonymous routing tools (eg Tor and I2P). When the gangs extract money from an organization, it can be hard to trace because of the use of cryptocurrency. On top of that, there are state-sponsored actors, who are not necessarily in it for the money, but are politically motivated in their attacks.

With the Ransomware-as-a-Service model, a hacker gang will create ransomware tools, infrastructure, and operating procedures or playbooks. Other people or gangs can then pay to access these tools etc and then carry out a ransomware attack. It’s a bit like shopping. The purchasers may use RaaS tools from multiple gangs, and what they do with them can vary. This makes identification harder because the users will have different TTPs. The benefit of this model for the users is that they have a tried and tested way to make money from legitimate organizations. The benefit for the gang that created the RaaS is that they can be making money without getting out of bed in the morning!

One of the first things that hacker gangs do is create backdoors into the network that they have just hacked. That means they can easily get back into that network in the future. It also means that they can sell this access to other people. Initial access brokers (IABs) make their money by selling access to victim networks on the dark web. They will have spent time gaining access in the first place, whereas the purchasers don’t need to spend any time before they start to attack their chosen target. Preferred methods of gaining access to an organization include: compromised emails; cloud misconfigurations; and software supply chain attacks.

Before we look at Crypter-as-a-Service, it’s useful to understand the three stages in a typical malware attack. Stage 1 is the dropper: the initial malicious file or command that retrieves the crypter. In stage 2, the crypter – a tool or process – obfuscates the malware payload so that it can bypass the defences on the network. Stage 3 is the malware itself, which supplies the functionality required by the attacker – typically some kind of remote administration. Antivirus and antimalware software is used to prevent crypters getting on to a network, and both sides are regularly updating their software in an unending battle.
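To make the defender’s side of that battle concrete, here’s a minimal sketch of the signature-based scanning idea: hash a file and compare the hash against a blocklist of known-bad values. Everything here is invented for illustration, and real antivirus engines layer heuristics, unpacking, and behavioural analysis on top – precisely because crypters exist to defeat this simple check.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Set;

// A toy illustration of signature-based scanning (Java 17+). The hash in
// the blocklist is a made-up placeholder, so nothing will ever match it.
public class SignatureScanner {

    private static final Set<String> KNOWN_BAD_SHA256 = Set.of(
        // Hypothetical hash of a known dropper (invented):
        "3f1a9c0000000000000000000000000000000000000000000000000000000000"
    );

    public static boolean isKnownBad(Path file) throws Exception {
        byte[] contents = Files.readAllBytes(file);
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(contents);
        return KNOWN_BAD_SHA256.contains(HexFormat.of().formatHex(digest));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isKnownBad(Path.of(args[0])) ? "quarantine" : "clean");
    }
}
```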

Crypter-as-a-Service (CaaS) provides the latest generation of software tools and services that would-be hackers can include in their workflows, without them needing to keep completely up to date with the latest techniques themselves. These bad actors may also, at the same time, purchase Malware-as-a-Service (MaaS).

Malware needs to be kept up to date to avoid detection, which makes Malware-as-a-Service such a popular purchase for less technically adept hackers. They may also purchase support contracts, access to updates, and affiliated services.

Obviously, these techniques are used on non-mainframe platforms. The reason mainframers need to be aware of them is that mainframes are no longer separate islands of computing. They are increasingly being connected to the cloud, and the latest Cost of a Data Breach Report from IBM Security found that 82% of breaches involved data stored in the cloud – public, private, or multiple environments. Mainframe sites with projects that embrace cloud computing may well wish to review their security policy for the cloud. The report goes on to say that 39% of breaches spanned multiple environments. In addition, mainframe APIs are often connected to mobile devices and the web. All of which makes reviewing the potential attack surface an important job.

What I’m suggesting is that mainframe breaches could start on a different platform within an organization and then move on to the mainframe. Criminal gangs, state-sponsored actors, and even disgruntled staff could now be taking steps (if they haven’t already) to access your mainframe data.

You can read more in The Professionalization of Cyber Crime whitepaper from WithSecure.

Sunday 3 September 2023

Using AI for business success

Let’s jump forward in time, say five years, to a world where AI is everywhere. What does our mainframe world look like? Let’s start with programming. Working programs are completed by the next day! A programmer simply tells the programming AI what the program needs to achieve and what language it should be written in, and within minutes (seconds for simpler programs) the program is written and compiled. The programmer then tests it and deploys it.

Reports still need to be produced. You simply tell the AI what it needs to report on, and the AI writes it for you. A quick check on the content is all that’s needed before it’s distributed.

Rather than needing a security team to monitor the mainframe, cloud, and other platforms, the AI can do that. It can keep an eye on who is making any changes. It can send an alert if someone is working from an unusual IP address to make changes. It might even take action such as locking out the user or suspending a job. It can check whether people are accessing files that are not associated with their job role. And it can identify if user-ids and passwords suddenly become active after a period of dormancy. It will protect your platforms against attempts to breach security.
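Two of those checks are easy to illustrate. Here’s a minimal rule-based sketch – all names and thresholds are invented – that flags a login from an IP address not previously seen for that user, and a user-id becoming active after a long dormant period. The point of an AI-based monitor is that it learns these baselines from history rather than having them hard-coded.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.Set;

// Toy versions of two checks described above (names and the 90-day
// threshold are invented): an unfamiliar address for this user, or a
// user-id waking up after a long dormant period.
public class LoginMonitor {

    private static final Duration DORMANCY_LIMIT = Duration.ofDays(90);

    private final Map<String, Set<String>> knownAddresses; // user -> usual IPs
    private final Map<String, Instant> lastSeen;           // user -> last login

    public LoginMonitor(Map<String, Set<String>> knownAddresses,
                        Map<String, Instant> lastSeen) {
        this.knownAddresses = knownAddresses;
        this.lastSeen = lastSeen;
    }

    public boolean isSuspicious(String user, String ipAddress, Instant now) {
        boolean unusualAddress =
            !knownAddresses.getOrDefault(user, Set.of()).contains(ipAddress);
        Instant previous = lastSeen.get(user);
        boolean wokenFromDormancy = previous != null
            && Duration.between(previous, now).compareTo(DORMANCY_LIMIT) > 0;
        return unusualAddress || wokenFromDormancy;
    }

    public static void main(String[] args) {
        LoginMonitor monitor = new LoginMonitor(
            Map.of("alice", Set.of("10.0.0.5")),
            Map.of("alice", Instant.parse("2023-01-01T00:00:00Z")));
        // New address and eight months dormant: flagged as suspicious.
        System.out.println(monitor.isSuspicious("alice", "203.0.113.9",
            Instant.parse("2023-09-01T00:00:00Z")));
    }
}
```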

All of this will make life so much simpler – or, in many ways, deskill the work that is being done. So, should we start working towards this computing nirvana now? As with most things, the dream and the reality are not the same.

While there are plenty of AI projects out there at the moment, and I’ve written about IBM’s AI work previously, it’s probably ChatGPT that most people are familiar with. ChatGPT is an example of a Large Language Model (LLM). It’s a bit like the predictive text you use when messaging someone on your phone. ChatGPT has been trained on a massive amount of data, and it can summarize what it knows and generate text (including poetry!). You may see the term generative AI used to describe this ability to generate text.
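To see the predictive-text analogy in miniature, here’s a toy next-word predictor built from a hand-written table of word-pair counts (all the words and counts are invented). An LLM is similar in spirit, but it predicts over sequences of tokens using billions of learned parameters rather than a small lookup table.

```java
import java.util.List;
import java.util.Map;

// A toy 'predictive text' model: given the current word, pick the most
// frequent next word from a table of counted pairs (counts invented).
public class NextWordToy {

    private static final Map<String, Map<String, Integer>> BIGRAM_COUNTS = Map.of(
        "data",  Map.of("breach", 7, "centre", 3),
        "cloud", Map.of("computing", 5, "platform", 4)
    );

    static String predictNext(String word) {
        return BIGRAM_COUNTS.getOrDefault(word, Map.of()).entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey)
            .orElse("?");
    }

    public static void main(String[] args) {
        for (String w : List.of("data", "cloud")) {
            System.out.println(w + " -> " + predictNext(w)); // data -> breach, cloud -> computing
        }
    }
}
```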

While ChatGPT is very clever and very useful – and, I must say, quite fun to use – organizations may find that it’s not quite as clever as they thought. Rather than doing their own research, some people may, for reasons of speed, rely on LLMs to give them an answer to a problem they are facing. However, because of the size of the dataset the LLM was trained on, it may contain conflicting information. It may also not contain the most recent information available. As a consequence, the seemingly all-knowing LLM may not be giving you the best advice. In addition, depending on the question, there may be factors that the LLM can’t take into account.

Another issue at some organizations arises when members of staff use ChatGPT to get information or solve problems. Sometimes, the information they are giving the LLM may be covered by non-disclosure agreements, or be proprietary (eg getting the LLM to debug some code), or company confidential. They assume that talking to the AI from their laptop is a bit like talking to a doctor or lawyer, and that the information they give goes no further. What usually happens is that the data entered becomes part of the LLM’s training data. And that means it may appear in the answers given to your organization’s competitors. With GDPR, CCPA, and similar regulations, if someone were to enter personally identifiable information (PII), that might lead to quite hefty fines.

Three things immediately come out of this. Firstly, all staff at an organization need to be made aware of the dangers of entering company confidential information into an LLM, or of violating privacy laws. Secondly, companies need to draft an LLM use policy and implement it.

Thirdly, there is the ongoing issue of shadow IT. With a mainframe, you pretty much know what applications people are using. The issue comes with groups within an organization making use of applications available in the cloud, or using apps on their phones that IT hasn’t checked for security. In any organization, there may well be individual members of staff, or whole teams, using ChatGPT or similar LLMs for their work. Because of the issues mentioned above, this can lead to loss of data, fines, etc.
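One technical control that can back up an LLM use policy is a pre-submission filter that redacts likely PII before a prompt ever leaves the organization. Here’s a minimal sketch; the two patterns are illustrative only, and a real control would need to cover far more (names, account numbers, internal code markers, and so on).

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// A minimal pre-submission filter: replace text matching obvious PII
// patterns before the prompt is sent to an external LLM. The two patterns
// here (email addresses and card-like digit runs) are illustrative only.
public class PromptScrubber {

    private static final Map<String, Pattern> PII_PATTERNS = Map.of(
        "[EMAIL]", Pattern.compile("\\b[\\w.%+-]+@[\\w.-]+\\.[A-Za-z]{2,}\\b"),
        "[CARD]",  Pattern.compile("\\b(?:\\d[ -]?){13,16}\\b")
    );

    public static String scrub(String prompt) {
        String result = prompt;
        for (var entry : PII_PATTERNS.entrySet()) {
            result = entry.getValue().matcher(result)
                          .replaceAll(Matcher.quoteReplacement(entry.getKey()));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(scrub(
            "Contact jane.doe@example.com about card 4111 1111 1111 1111"));
        // -> Contact [EMAIL] about card [CARD]
    }
}
```

A filter like this doesn’t remove the need for the awareness training and policy above – it just catches the obvious slips.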

Highlighting the concerns that people have with AI, the UK’s recently published National Risk Register (NRR) 2023 officially classes AI as a security threat. The document details the various threats that could significantly impact the UK’s safety, security, or critical systems at a national level. AI is described as a ‘chronic risk’, highlighting the long-term threat that it poses.

AI is already with us. It is really useful, and my IT nirvana prediction will probably come true. It’s just that getting there from here is going to be a bumpy road. If you like metaphors, we’re in the situation where we’ve just handed our teenager the keys to our new car.