Sunday, 3 September 2023

Using AI for business success


Let’s jump forward in time, say five years, to a point where AI is everywhere. What does our mainframe world look like? Let’s start with programming. Working programs are completed by the next day! A programmer simply tells the programming AI what the program needs to achieve and which language it should be written in, and within minutes (seconds for simpler programs) the program is written and compiled. The programmer then tests it and deploys it.

Reports still need to be produced. You simply tell the AI what it needs to report on, and the AI writes it for you. A quick check on the content is all that’s needed before it’s distributed.

Rather than needing a security team to monitor the mainframe, cloud, and other platforms, the AI can do that. It can keep an eye on who is making changes. It can send an alert if someone is making changes from an unusual IP address. It might even take action, such as locking out the user or suspending a job. It can check whether people are accessing files that are not associated with their job role. And it can identify when user-ids and passwords suddenly become active after a period of dormancy. It will protect your platforms against attempts to breach security.
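Today, of course, those checks have to be written and tuned by hand. Purely as an illustration, here is a minimal Python sketch of the kind of rules described above. The user names, IP addresses, and dataset names are invented, and a real system would pull these events from its security logs (SMF, RACF, syslog, and so on) rather than from hard-coded tables.

    # Minimal sketch of the kind of rules a monitoring AI might apply.
    # All record fields (user, ip, dataset, timestamp) are hypothetical.
    from datetime import datetime, timedelta

    KNOWN_IPS = {"alice": {"10.1.1.5"}, "bob": {"10.1.2.9"}}      # usual source addresses
    ROLE_DATASETS = {"alice": {"PAYROLL"}, "bob": {"SALES"}}       # dataset prefixes tied to job role
    LAST_SEEN = {"alice": datetime(2023, 9, 1), "bob": datetime(2023, 1, 3)}
    DORMANCY = timedelta(days=90)

    def check_event(user, ip, dataset, when):
        """Return a list of alerts raised by one access event."""
        alerts = []
        if ip not in KNOWN_IPS.get(user, set()):
            alerts.append(f"{user}: change made from unusual IP {ip}")
        if dataset.split(".")[0] not in ROLE_DATASETS.get(user, set()):
            alerts.append(f"{user}: accessed {dataset}, outside normal job role")
        if when - LAST_SEEN.get(user, when) > DORMANCY:
            alerts.append(f"{user}: user-id active again after long dormancy")
        return alerts

    # Example: a dormant user-id, from a new address, touching payroll data.
    for alert in check_event("bob", "203.0.113.7", "PAYROLL.MASTER", datetime(2023, 9, 3)):
        print("ALERT:", alert)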

It will make life so much simpler – or, in many ways, deskill the work that is being done. So, should we start working towards this computing nirvana now? As with most things, the dream and the reality are not the same.

While there are plenty of AI projects out there at the moment, and I’ve written about IBM’s AI work previously, it’s probably ChatGPT that most people are familiar with. ChatGPT is an example of a Large Language Model (LLM). It’s a bit like the predictive text you use when messaging someone on your phone. ChatGPT has been trained on a massive amount of data. It can then summarize what it knows and generate text (including poetry!). You may see the term generative AI used to describe this ability to generate text.
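For anyone who hasn’t tried it programmatically, interacting with an LLM like ChatGPT boils down to sending it a prompt and getting generated text back. Here is a rough sketch using OpenAI’s Python library as it stood at the time of writing; the API key and prompt are placeholders, and the library’s interface does change between versions, so treat it as illustrative rather than definitive.

    import openai

    openai.api_key = "YOUR-API-KEY"  # placeholder - keep real keys out of source code

    # Ask the model for some generated text, much as a user would in the chat window
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Write a short poem about mainframes."},
        ],
    )

    print(response["choices"][0]["message"]["content"])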

While ChatGPT is very clever and very useful, and, I must say, quite fun to use, organizations may find that it’s not quite as clever as they thought. Rather than doing their own research, some people may, for reasons of speed, rely on LLMs to give them an answer to a problem they are facing. However, the huge dataset that the LLM was trained on may contain conflicting information, and it may not include the most recent information available. As a consequence, the seemingly all-knowing LLM may not be giving you the best advice after all. In addition, depending on the question, there may be factors that the LLM simply can’t take into account.

Another issue faced by some organizations arises when members of staff use ChatGPT to get information or to solve problems. Sometimes, the information that they are giving the LLM may be covered by non-disclosure agreements, may be proprietary (eg getting the LLM to debug some code), or may be company confidential. They assume that talking to the AI from their laptop is a bit like talking to a doctor or lawyer, and that the information they give goes no further. What usually happens is that the data entered becomes part of the LLM’s training data. And that means it may appear in the answers given to your organization’s competitors. With GDPR, CCPA, and similar regulations, if someone were to enter personally identifiable information (PII), that could lead to quite hefty fines.

Three things immediately come out of this. Firstly, all staff at an organization need to be made aware of the dangers of entering company confidential information into an LLM or violating privacy laws. Secondly, companies need to draft an LLM usage policy and implement it; a very simple example of the kind of check such a policy might mandate is sketched below.

Thirdly, there is the ongoing issue of shadow IT. With a mainframe, you pretty much know what applications people are using. The issue comes with groups within an organization making use of applications available in the cloud, or using apps on their phones that IT hasn’t checked for security. In any organization, there may well be individual members of staff, or whole teams, using ChatGPT or similar LLMs for their work. Because of the issues mentioned above, this can lead to data loss, fines, and other problems.
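On the policy point, one practical measure is a pre-submission check that blocks obviously sensitive text before it ever reaches an external LLM. The sketch below is purely illustrative; the regular expressions and the blocklist of terms are invented placeholders, and a real deployment would use proper data-loss-prevention tooling rather than a handful of patterns.

    import re

    # Hypothetical patterns for obviously sensitive content
    PATTERNS = {
        "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
        "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }
    BLOCKED_TERMS = {"PROJECT NEPTUNE", "COMPANY CONFIDENTIAL"}  # invented examples

    def safe_to_send(prompt):
        """Return (ok, reasons) saying whether a prompt may go to an external LLM."""
        reasons = [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]
        reasons += [term for term in BLOCKED_TERMS if term in prompt.upper()]
        return (not reasons, reasons)

    ok, reasons = safe_to_send("Please debug this code; the test user is jane.doe@example.com")
    print("OK to send" if ok else "Blocked: " + ", ".join(reasons))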

Highlighting the concerns that people have with AI, the recently published National Risk Register (NRR) 2023 officially classes AI as a security threat. The document details the various threats that could significantly impact the UK’s safety, security, or critical systems at a national level. AI is now described as a ‘chronic risk’, highlighting the long-term threat that it poses.

AI is already with us. It is really useful, and my IT nirvana prediction will probably come true. It’s just that getting there from here is going to be a bumpy road. If you like metaphors, we’re in the situation where we’ve just handed our teenager the keys to our new car.
