Sunday, 19 November 2023

IBM’s solution to AI storage problems

With everyone talking about AI these days, IBM announced, at the end of October, a storage solution to the problem of where to put the data behind your data-intensive and AI workloads. Drum roll please, we have the new IBM Storage Scale System 6000, a cloud-scale global data platform.

The product has an enhanced high-performance parallel file system designed for data-intensive use cases. It provides up to 7 million input/output operations per second (IOPS) and up to 256 GB/s of throughput for read-only workloads per system in a 4U (four rack units) footprint.
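To put those headline figures in context, here’s a rough back-of-envelope calculation – my own arithmetic on the quoted numbers, not anything from IBM’s announcement – of what that density works out to per rack unit and per standard 42U rack, assuming the claims simply scale and nothing else in the rack gets in the way:

    # Rough illustrative arithmetic based only on the quoted headline figures.
    # The per-rack extrapolation is my own assumption, not an IBM claim.
    iops_per_system = 7_000_000   # up to 7M IOPS per 4U system
    throughput_gbps = 256         # up to 256 GB/s per 4U system (read-only)
    rack_units_per_system = 4

    iops_per_u = iops_per_system / rack_units_per_system   # 1.75M IOPS per rack unit
    gbps_per_u = throughput_gbps / rack_units_per_system   # 64 GB/s per rack unit

    systems_per_rack = 42 // rack_units_per_system          # 10 systems in a 42U rack
    print(systems_per_rack * iops_per_system)                # roughly 70M IOPS per rack
    print(systems_per_rack * throughput_gbps)                # roughly 2,560 GB/s per rack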

Denis Kennelly, general manager, IBM Storage, is quoted in the press release as saying, “The potential of today’s new era of AI can only be fully realized, in my opinion, if organizations have a strategy to unify data from multiple sources in near real-time without creating numerous copies of data and going through constant iterations of data ingest. IBM Storage Scale System 6000 gives clients the ability to do just that – brings together data from core, edge, and cloud into a single platform with optimized performance for GPU (graphics processing unit) workloads.”

We’re told that the IBM Storage Scale System 6000 is optimized for storing semi-structured and unstructured data, including video, imagery, text, and instrumentation data, that is generated daily and accelerates an organization’s digital footprint across hybrid environments. With the IBM Storage Scale System, clients can expect greater data efficiencies and economies of scale with the addition of IBM FlashCore Modules (FCM), to be incorporated in the first half of 2024:

·        New maximum-capacity non-volatile memory express (NVMe) FCMs will provide capacity efficiency with 70% lower cost and 53% less energy per TB compared with IBM’s previous maximum-capacity flash drives for the IBM Storage Scale System. This can help clients realize the full performance of NVMe with the cost advantages of quad-level cell (QLC) flash.

·        Powerful inline hardware-accelerated data compression and encryption to help keep client data secured even in multi-user, multi-tenant environments.

·        The Storage Scale System 6000 with FCM will support 2.5 times the amount of data in the same floor space as the previous-generation system.

In addition, clients can accelerate the adoption and operationalization (is there such a word?) of AI workloads with IBM watsonx:

·        Engineered with a new NVMe over Fabrics (NVMe-oF) turbo tier, new parallel multi-tenant data isolation, and IBM-patented computational storage drives, this is designed to provide more performance, security, and efficiency for AI workloads.

·        Storage Scale software, the global data platform for unstructured data that powers the Scale System 6000, connects data with an open ecosystem of multi-vendor storage options including AWS, Azure, IBM Cloud, and other public clouds, in addition to IBM Storage Tape.

Lastly, they say, clients can gain faster access to data with over 2.5 times the GB/s throughput and double the IOPS performance of market-leading competitors. It provides high throughput and fast access even when multiple concurrent AI and data-intensive workloads are run to meet a range of use cases.

IBM certainly seems to have set the bar quite high for its competitors to try to beat, while at the same time offering a product that is going to be useful to organizations looking to increase their use of AI and needing the capacity and speed to do so.

Sunday, 12 November 2023

GSE UK Conference 2023 – from my point of view

The Guide Share Europe (GSE) UK Annual Conference ran from lunchtime on Monday 30 October until late afternoon on Thursday 2 November. It was held at Whittlebury Hall, Whittlebury, near Towcester, Northamptonshire NN12 8QH, UK. This year’s strapline was “Where Technology and Talent meet Tomorrow”. And it was brilliant.

There were over 610 delegates, which must have been a record. And there were 278 sessions across 18 streams, including the new Artificial Intelligence (AI) stream, as well as: 101 (New to Mainframe), 102 (New’ish to Mainframe), AppDev (Application Development), CICS (Transaction Processing), Db2 (Relational Database), IMS, WIT (Women in IT), Large Systems (z/OS, z/VM, Linux on Z), Mainframe Skills & Learning, MQ (Messaging), Networks (Communications), New Technologies, Security (Securing Mainframes), Storage Management (Disks, Tapes), Systems Management (Tools for managing systems), zP&C (zSystems Performance & Capacity Management), and the Code-a-Thon event.

I arrived slightly later than planned due to an accident and later roadworks on the A34 into Oxford. And, just before registering, I spoke to Mark Wilson, who is the GSE UK Region Manager, and also Technical Director at Vertali. After that, I just had time to look round some of the exhibitors before dashing off to the first session I was attending. A quick break was followed by another session.

Lunch was nice, and gave more time to chat to exhibitors. I spoke to IBM Champion Matt Nation, Managing Director at Verhoef Training Ltd, which turned into a putting-the-world-to-rights session. I moved on to Fitz Software’s stand. Michael FitzGerald, MD at Fitz Software, quite rightly, wouldn’t let me start the whiskey tasting session because I was still eating. I agreed to come back later. I also stopped by the Action Software stand for a chat with Hugo Prittie, the CEO.

After lunch I went to a Security stream presentation. And, after that, I had a meeting – yes it was in the bar – with MainTegrity. That was followed by another session in the AI stream.

There were a number of exhibitors giving away T-shirts, and some giving away socks or caps. Interestingly, the PopUp Mainframe stand was giving away pants! It was there that I bumped into the always wonderful Resli Costabell, the award-winning international speaker, trainer, and coach. She was leading the Women in IT stream. It’s always a pleasure to chat to her.

At dinner, there were a few people in Halloween costumes. One person who came over to chat wearing their costume and face paint was Atul Bhovan, DevOps Solution Adviser with BMC Software. We agreed to meet the following day. I had dinner with IBM’s Anna Dawson, who chairs the system management streams. Tracey Dean, IBM Offering Manager: IMS and z/VM Management Software, came to join us for a chat. Tracey has spoken at a Virtual IMS user group meeting.

Wednesday started with me giving a presentation to the AI stream about the brain and what psychologists think intelligence is. It was well received, and a number of people came up and said so afterwards. I was also asked to present it again at lunchtime to some people who couldn’t attend.

I then watched the next AI presentation before lunch. I bumped into IBM’s Joe Winchester on the stairs and chatted briefly. I also bumped into him in the bar – again a brief chat. Unfortunately, we never got to have a proper conversation about Zowe, open-source software, and anything else. Next time!

At lunchtime, I made sure to catch up with Andy McCandless, Presales Consultant with Beta Systems and an IBM Champion. He produces a great mainframe-based newsletter, which is available on LinkedIn.

After lunch, I had a meeting with the Planet Mainframe people. There were lots of ideas bounced around by Andrew Armstrong and Amanda Hendley (whom people will know from the Virtual IMS, CICS, and Db2 user group meetings).

Then it was back to the AI stream for a game of Jeopardy led by IBM Champion Henri Kuiper.

Here’s a photo of me enjoying the session.

I missed the final keynote and joined everyone for pre-dinner drinks. There I chatted to Herb Daly, Senior Lecturer in Computer Science at the University of Wolverhampton and an IBM Champion. He had brought a number of students along to the conference. I also caught up with Tony Amies, Software Technical Director at Vertali. I’d missed his lunch-and-learn session.

I chatted to Darren Surch, CEO at Interskill Learning and a Lifetime IBM Champion. This is his photo. He said, “Honoured to spend time with the industry legend Trevor Eddolls. Priceless!”

It was a pleasure to meet Shari Chiara, Program Manager, IBM Champions and Community Advocacy – IBM Z and LinuxONE. All the IBM Champions who were around at that time had their photograph taken next to a racing car.

Champions in the photo with Shari are: Mark Wilson, Henri Kuiper, Darren Surch, Trevor Eddolls, Matt Nation, Max Stern Dahl, Steven Perva, Larry Strickland, Andy McCandless, Tom Crocker, Wolfram Greis, Leendert Blondeel, Colin Knight, Philip Nelson, and Neale Ferguson.

My apologies to all the other people I chatted to whom I haven’t mentioned. And my thanks to all the people who took these photos, which I have shamelessly stolen from LinkedIn and used.

IBM brought their Lego model of a mainframe. Here’s my photo with that.

And here I am standing by the real thing. 

Sunday, 5 November 2023

Ideas from philosophy and good AI

One of the ancient questions in philosophy is, “What do you have to do to be a good man, or to live a good life?” At the moment, there are a number of meetings going on all over the world trying to decide about the ‘goodness’ of artificial intelligence (AI) and, much like the parents of a slightly wayward teenager, how AI can be kept on the straight and narrow until it grows up.

The question we should really be asking is, “What makes something good?” But the answer to that is much more complex than it at first seems. To a child, a good parent is someone who lets them eat sweets, lets them stay up late, and lets them watch TV or play computer games all day. A ‘bad’ parent is someone who places limits on those activities and makes them learn their spellings, recite their tables, and read books. However, the adult version of that child may well disagree with the view of their younger self if they have failed to achieve all that they were capable of and are unhappy with how their life has turned out.

Another question to ask is whether the same decision or activity is always the right one in order to be a good person or to achieve a good outcome. You can add to that whether the same right or wrong decisions apply to all cultures at all times. Here’s an example. A man walks into a crowded room and starts firing a gun at the other people in the room. Is that a good thing to do? Hopefully, most people feel the answer is ‘no’, but want to know more information. How about if the room is full of people who are about to destroy the world – including you? Does that make the murders acceptable? In how many films or TV shows has murder been made acceptable because a ‘bad’ person has been stopped from doing harm? If I were writing an algorithm about when my AI could kill people, it would have to be quite a complicated one. The point I’m trying to make is that the rules we live by are quite complex and often unstated.
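To make that concrete, here’s a deliberately crude sketch – my own toy example in Python, not anything any vendor actually ships – of what such a rule-based “is this action acceptable?” check starts to look like. Every new scenario seems to demand another field and another exception, which is exactly the point:

    # A toy sketch of how quickly a rule-based "may the AI act?" check
    # accumulates exceptions. Purely illustrative; not a real system.
    from dataclasses import dataclass

    @dataclass
    class Situation:
        harms_people: bool
        prevents_greater_harm: bool
        consent_given: bool
        lawful_authority: bool

    def action_permitted(s: Situation) -> bool:
        # Rule 1: never harm people...
        if s.harms_people:
            # ...except to prevent a greater harm (the crowded-room example above)
            if s.prevents_greater_harm:
                return True
            # ...or with consent and lawful authority (surgery? self-defence?)
            if s.consent_given and s.lawful_authority:
                return True
            return False
        return True

    print(action_permitted(Situation(True, True, False, False)))   # True
    print(action_permitted(Situation(True, False, False, False)))  # False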

Let’s go back to the fifth century BCE. Socrates said that a good man does not concern himself with petty personal wants but only with whether his actions are good and just. Although, of course, that hasn’t told us what is meant by good or just. However, it gives us a starting point for our AI.

Aristotle, in the fourth century BCE, suggested that a good man is the man who acts and lives virtuously and derives happiness from that virtue. He introduced the idea of virtue. I’ve not yet heard anyone talk about virtue in association with an AI.

Plato, who came between Socrates and Aristotle, suggested four virtues: prudence, fortitude, temperance, and justice. Aristotle muddied the waters a little by suggesting that a virtue can be defined as a point between a deficiency and an excess of a trait. The point of greatest virtue lies not in the exact middle, but at a golden mean that is sometimes closer to one extreme than the other. I say “muddied the waters” because that makes it harder to code an algorithm, or to train an AI (or a human), on.
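Again, purely as my own illustration (nothing to do with any real AI system), here’s how awkward even that one idea is to pin down in code. The mean sits somewhere between a deficiency and an excess of a trait, but where it sits depends on the context – and that context is exactly the bit nobody has written down:

    # Toy illustration of Aristotle's golden mean: a virtue lies somewhere between
    # a deficiency (0.0) and an excess (1.0) of a trait, but not necessarily at 0.5.
    # The 'lean' value is the context-dependent part that is so hard to specify.
    def golden_mean(deficiency: float, excess: float, lean: float) -> float:
        """Return the virtue point; lean says how far towards the excess it sits."""
        return deficiency + lean * (excess - deficiency)

    # Courage, on a scale from cowardice (0.0) to recklessness (1.0):
    print(golden_mean(0.0, 1.0, lean=0.6))   # facing danger: closer to boldness
    print(golden_mean(0.0, 1.0, lean=0.35))  # crossing a road: closer to caution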

Marcus Aurelius, the Stoic philosopher, said in the second century CE, “Waste no more time arguing what a good man should be. Be one.” What he’s suggesting is that we’re all wasting our time discussing being good; we should lead by example and live a good life. I like the idea of just getting on and doing it. However, having done stuff all day, how can I know at the end of it whether I have been doing good or not?

Thomas Babington Macaulay in the 19th century came up with a quote that seems to apply to much AI research across the world: “The measure of a man's character is what he would do if he knew he would never be found out.” Or maybe I’m just a little cynical about people who are training AIs to hack mainframes? Perhaps people working on AI are like the parents of teenagers, helping them to understand the need for kindness, honesty, courage, generosity, and integrity. These virtues can help to make the AI ‘good’. By cultivating virtues within the AI, we could, hopefully, shape its decisions.

Bertrand Russell, who died in 1970, said that the good life is one inspired by love and guided by knowledge. Clearly, AIs are being fed lots of information – and, again hopefully, not too many alternative facts – but I have not sat through an AI presentation where someone mentioned the word ‘love’. It’s suggested that someone following Russell’s ideas will lead a good life with a deep sense of fulfilment. AIs don’t do feelings – unless you know better? No-one expects an AI to feel happy at the end of a day’s work.

Those working on AIs might do well to remember the words of Ralph Waldo Emerson in the 19th century. He said: “The purpose of life is not to be happy. It is to be useful, to be honourable, to be compassionate, to have it make some difference that you have lived and lived well.” Again, I don’t know whether the word ‘honourable’ is in the minds of people training AIs, but hopefully the AI is making some difference, in a positive way.

My thinking at the moment is that AI is neither good nor bad; it is only the use that people put it to that will make it seem either one or the other. Like every other invention, it will lead to change, but it will also lead to new jobs being created. I am sure that there will be an arms race as bad actors use AI to attack mainframes and the good guys use AI to protect them. I am also uncertain whether legislation is going to be the most successful way to control AI. Large western governments will try this route because it’s the way they try to control everything else, but offshore development will continue regardless. What I am suggesting is that AI developers should look back at over 2,000 years of philosophical thinking to decide what the right thing is to do when training the AI they are working on.