Mainframes are full of COBOL programs, and to a large extent, many large businesses rely on those COBOL programs to stay in business. That's not to say that mainframe technology is somehow locked into the 1980s or even earlier; it simply shows how long successful code lasts, and these applications may well have been in use for many years.
The fact is that there are an estimated 200 to 250 billion lines of COBOL code in production, and it's also been estimated that 1.5 billion new lines of COBOL are written each year. And yet most educational establishments are teaching young people other languages, ones that point them towards jobs in the overcrowded and highly competitive world of gaming and mobile app development.
As a consequence, there aren’t a lot of new people out there who can write COBOL programs quickly and easily. And the number of older programmers with COBOL skills is decreasing all the time as those programmers reach pensionable age and decide to pursue other interests, eg golf!
The good thing about COBOL is that it uses an English-like syntax to describe almost everything in the program. That makes it verbose, ie quite long compared with other languages, but also largely readable on its own, so it doesn't need too many comments. However, that is only true if you can find the source code rather than just the compiled code. And that can be easier said than done!
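To see what that English-like verbosity looks like in practice, here's a minimal, hypothetical example (a made-up discount calculation, not taken from any real system). Even a non-programmer can follow most of what it does:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. APPLY-DISCOUNT.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  ORDER-TOTAL      PIC 9(7)V99 VALUE 1200.00.
       01  DISCOUNT-RATE    PIC V99     VALUE 0.10.
       01  FINAL-TOTAL      PIC 9(7)V99 VALUE ZERO.
       01  FINAL-TOTAL-OUT  PIC Z(6)9.99.
       PROCEDURE DIVISION.
      *    Orders over 1000 get the discount; smaller orders
      *    are passed through unchanged.
           IF ORDER-TOTAL IS GREATER THAN 1000
               COMPUTE FINAL-TOTAL ROUNDED =
                   ORDER-TOTAL * (1 - DISCOUNT-RATE)
           ELSE
               MOVE ORDER-TOTAL TO FINAL-TOTAL
           END-IF
           MOVE FINAL-TOTAL TO FINAL-TOTAL-OUT
           DISPLAY "FINAL TOTAL: " FINAL-TOTAL-OUT
           STOP RUN.

Readable, yes. But without the source, all you have is the compiled load module.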
So, what can you do if you would like to make changes to a COBOL program that has, perhaps, been running since 1992, but now doesn't quite meet your needs? Let's assume it's not one of your absolute core business programs running in CICS or IMS, but that it is still very useful to the organization. What are your choices?
If you do have the original source code, and if you can find a COBOL programmer, they might be able to rewrite it or add new functionality to it. Those are two big 'ifs'.
An alternative is to use a software package that will translate the application into a modern language. Unfortunately, this all too often produces code that is almost impossible to maintain.
Or you can get your younger programmers to look at the application and write something in their language of choice that does the same thing, plus the updates you require. Unfortunately, mainframe programs have any number of dependencies, and they often require particular entries in files to be treated in very specific ways, which can make such a rebuild very difficult.
And none of those ideas can be described as ‘the easy way’.
What is needed is an easy-to-use, deterministic machine learning platform that knowledgeable personnel can teach what they wish to achieve. The teaching approach has the added advantage of extracting tacit knowledge, filling in critical details that are often missed.
This platform needs to be able to build a model of the existing system by obtaining specific, relevant data samples from it. Users should also be able to teach the system using real-world examples.
Once the new program has all the information, it can be tested. Various scenarios can be set up, and the results from the new program compared with those from the existing program, as sketched below. Where there are differences, the platform can be taught again so that it satisfies those scenarios, and then retested to ensure that everything else still runs correctly.
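In mainframe terms, that comparison step amounts to running the old and new programs against the same inputs and diffing their output files. As a minimal sketch of the idea (the file names and 80-byte record layout are invented for illustration, and a real comparison would also handle differing record counts and tolerated differences), a simple COBOL compare program might look like this:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. COMPARE-RUNS.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT OLD-RESULTS ASSIGN TO "OLDOUT.DAT"
               ORGANIZATION IS LINE SEQUENTIAL.
           SELECT NEW-RESULTS ASSIGN TO "NEWOUT.DAT"
               ORGANIZATION IS LINE SEQUENTIAL.
       DATA DIVISION.
       FILE SECTION.
       FD  OLD-RESULTS.
       01  OLD-RECORD          PIC X(80).
       FD  NEW-RESULTS.
       01  NEW-RECORD          PIC X(80).
       WORKING-STORAGE SECTION.
       01  WS-EOF-FLAG         PIC X    VALUE "N".
       01  WS-RECORD-COUNT     PIC 9(7) VALUE ZERO.
       01  WS-MISMATCHES       PIC 9(7) VALUE ZERO.
       PROCEDURE DIVISION.
           OPEN INPUT OLD-RESULTS NEW-RESULTS
      *    Read the two output files in step and flag any record
      *    where the new program's result differs from the old one.
           PERFORM UNTIL WS-EOF-FLAG = "Y"
               READ OLD-RESULTS
                   AT END MOVE "Y" TO WS-EOF-FLAG
               END-READ
               READ NEW-RESULTS
                   AT END MOVE "Y" TO WS-EOF-FLAG
               END-READ
               IF WS-EOF-FLAG = "N"
                   ADD 1 TO WS-RECORD-COUNT
                   IF OLD-RECORD NOT = NEW-RECORD
                       ADD 1 TO WS-MISMATCHES
                       DISPLAY "MISMATCH AT RECORD " WS-RECORD-COUNT
                   END-IF
               END-IF
           END-PERFORM
           DISPLAY WS-MISMATCHES " MISMATCHES IN "
                   WS-RECORD-COUNT " RECORDS"
           CLOSE OLD-RESULTS NEW-RESULTS
           STOP RUN.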
Basically, you need a platform that uses Machine Learning (ML) to build the behaviour of the final system through a deterministic process of inductive reasoning. Consistency of logic and behaviour can be maintained by letting the platform 'argue back', flagging any logical inconsistencies it finds. (Deliberate 'bugs' could even be added for testing purposes.) Using machine learning in this way would greatly cut down on the amount of testing that would otherwise be required.
In practice, the platform user (the application builder) has to 'explain' to the system what they want to happen at a PROCESS and a DATA level. Data can be visualized as knowledge graphs. Wikipedia tells us that a knowledge graph is a “knowledge base that uses a graph-structured data model or topology to integrate data”. Concrete examples are used, usually derived from the real world, though they could also be synthetic. These examples are remembered AS TESTS, at build time and for the future (see the sketch below). This reduces the risk involved in future maintenance, allowing the behaviour to be adapted whenever required. Once the new application has been created, it can be deployed.
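To make 'examples remembered AS TESTS' a little more concrete, you can picture each remembered scenario as a record pairing the inputs the user supplied while teaching with the output they confirmed as correct. The copybook-style layout below is purely illustrative (the field names are invented, not taken from any actual platform); replaying such records after every change is what turns the examples into regression tests:

      * Hypothetical layout for one remembered scenario: the taught
      * inputs plus the confirmed expected output.
       01  SCENARIO-RECORD.
           05  SCENARIO-ID           PIC 9(6).
      * 'R' = real-world example, 'S' = synthetic example.
           05  SCENARIO-SOURCE       PIC X.
           05  TAUGHT-INPUTS.
               10  CUSTOMER-ID       PIC X(8).
               10  ORDER-TOTAL       PIC 9(7)V99.
           05  EXPECTED-RESULT.
               10  FINAL-TOTAL       PIC 9(7)V99.
               10  DISCOUNT-APPLIED  PIC X.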
An example of such a platform that I recently came across, and that you may not have heard of yet, is RulEvolution. It uses a firewalled production environment so as not to interfere with any other work going on in the mainframe environment.
According to RulEvolution, its platform achieves a far greater ROI because it offers a much quicker time to market, thanks to its speedier build time and the fact that most testing is already done. Its cost is also much lower than any traditional approach (or any other approach!) to custom software building. In addition, the application does what you want it to from the start, the system can be continually refined, and the scenarios serve as a basis for documentation. The company also says that it's easy to make changes safely, allowing businesses to adapt quickly, and that longer-term maintenance is reduced.
The platform is taught what you want to achieve by example. The system remembers the learned scenarios, and whenever the system is altered, those scenarios are played back to check that nothing has broken; they act as the tests (and are remembered). The system focuses on data-manipulation behaviour rather than on 'rules' that then have to be interpreted. The behaviour is captured as pattern matching against the data and encapsulated as a model for quick and easy deployment.
It certainly seems like an easy way to update those COBOL programs that are reaching their ‘best-before’ date.