Monday 29 March 2010

Linux on mainframes – 10 years old already

It seems so long ago, back in 1999, when we were discussing the Y2K bug (that was going to bring the universe as we knew it to an end) and whether Linux on a laptop was ever going to scale to mainframe proportions. And, while at the time I said the Y2K bug was overhyped (but then we all said we said that afterwards!!), I was never overly sold on mainframe Linux – wondering what kind of strange hybrid creature it would be. Doh! Not only has mainframe Linux (Linux on System z) now celebrated 10 years of service, it is a hugely successful and growing part of the mainframe world.

In 1999 IBM announced its commitment to Linux and promised to make it available on all its platforms and to produce Linux versions of its software. This meant that IBM would be spending lots of money on Linux and needed it to be a success. I can remember talking to various people at conferences, and the general opinion was one of confusion. IBM sold operating systems, so why would it champion a free alternative? If Linux didn’t become a success, then IBM would have wasted lots of money – money that could have been spent improving its other products. Alternatively, if Linux did become a success, pundits wondered whether that would result in a drop in IBM’s software and hardware revenues. Conspiracy theorists suggested that IBM would cause Linux development to split into hundreds of variants, weakening its ability to succeed in the marketplace. It all seems so fanciful now!

IBM established the Linux Technology Center in 1999 and announced Linux on the mainframe in 2000. Several Linux distributions (or distros, as old Linux hands call them) now run on a mainframe, including SuSE and Red Hat. Other distributions that run on a mainframe include Debian, Gentoo, Slackware, and CentOS. There are also thousands of ISV applications that have been recompiled to run on mainframe Linux. Importantly, Linux is not emulated on a mainframe; it runs as a completely native operating system, just like z/OS. However, it is most often found running under the king of virtualization, z/VM, allowing multiple Linux systems to run on one lot of hardware.

There is a specialty processor available for Linux – like the zIIP and zAAP – called the Integrated Facility for Linux (IFL). Linux work runs on the IFL rather than on the normal General Purpose Processor (GPP), and because users are only charged for work running through the GPP, moving Linux work to an IFL saves them money. Of course, there is the cost of buying (in reality, turning on) the IFL specialty processor.

It used to be that every sentence about a mainframe contained a mention of SOA, but nowadays the word is “cloud”. The IBM Web site says: “Cloud computing promises operational efficiency and cost reduction due to the standardization, automation, and virtualization. High flexibility, scalability, and easy management can be provided by the virtual Linux server environment on System z.” That basically says that Linux on System z is a good thing to have in your cloud computing environment because it is an efficient cloud computing platform.

So, happy 10th birthday mainframe Linux (or Linux on System z as we should be calling it). There are apparently 3150 Linux applications enabled for System z and IBM claims that 70 percent of the top 100 global mainframe customers run Linux – so I confidently predict that we will be celebrating its 20th birthday.

Sunday 21 March 2010

Facebook and Twitter

The exciting news is that you can now stay up-to-date with all that’s happening in the IMS world, and with what’s happening with the Virtual IMS CONNECTION user group by following them on Twitter or becoming a Fan on Facebook – as well as by checking the Web site regularly at www.virtualims.com.

You could also add the Twitter and Facebook widgets to your Web sites, so not only do you stay in touch, but IMS professionals visiting your site will also be able to see the latest information.

So, follow us on Twitter and become a fan on Facebook.

iTech-Ed Ltd is also on Twitter and Facebook. You can see all the latest blogs and other mainframe news on Twitter at twitter.com/t_eddolls. Or you can become a fan on Facebook by going to www.facebook.com/pages/iTech-Ed-Ltd/201736748040.

Sunday 14 March 2010

IMS logical relationships

The latest webinar from Virtual IMS CONNECTION was entitled “The logical answer: IMS Logical Relationships”, and was presented by Aurora Emanuela Dell’Anno, an Engineering Services Architect with CA. Aurora is an IMS and DB2 specialist systems engineer with a background in application development. She has spent the past 14 years specializing in various aspects of the IBM software world, with particular attention to database management and data warehousing. Keeping pace with the times, she has devoted recent years to in-depth work on data warehousing, trouble-shooting systems installations, and application performance issues – which included getting her hands dirty with distributed platforms, customers’ implementations, and problem-solving in this field, all from the perspective of systems engineering and database administration. Aurora works in CA’s highly-focused Mainframe Solutions Center.

Aurora started her presentation by explaining exactly what a hierarchical database is and what relational data structures are. She then asked why IMS needed logical relationships, and suggested the reason was to access segments by a field other than the one chosen as the key, or to associate segments in two different databases or hierarchies. Aurora informed the user group that IMS provides two very useful tools to resolve these data requirements: secondary indexes and logical relationships.

Thinking about database design, it can be useful to normalize the data. This helps to break data into naturally associated groupings that can be stored collectively in segments in a hierarchical database. It also breaks individual data elements into groups based on the processing functions, and groups data based on inherent relationships between data elements.

The database types that support logical relationships are HISAM, HDAM, PHDAM, HIDAM, and PHIDAM. There are no logical relationships with Fast Path DEDB or MSDB.

Logical relationships are formed by creating special segments that use pointers to access data in other segments. The path between the logical child and the segment to which it points is called a logical relationship.

There are three types of logical relationships:
  • Unidirectional
  • Bidirectional physically paired
  • Bidirectional virtually paired.
A unidirectional relationship links two segment types (logical child and its logical parent) in one direction. A one-way path is established using a pointer in the logical child.

A bidirectional physically paired logical relationship links two segment types, a logical child and its logical parent, in two directions. A two-way path is established using pointers in the logical child segments.

A bidirectional virtually paired logical relationship is like a bidirectional physically paired relationship. It:
  • Links two segment types, a logical child and its logical parent, in two directions, establishing a two-way path
  • Can be established between two segment types in the same or different databases.
A logical child segment exists in only one database. When going from database A to database B, IMS uses the pointer in the logical child segment. When going from database B to database A, IMS uses the pointer in the logical parent, as well as the pointer in the logical child segment.
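As a rough mental model of that two-way navigation (plain Python, nothing to do with IMS internals, and all segment names invented for the example):

```python
# Toy model of bidirectional virtual pairing: the logical child exists
# only in database A; database B reaches it through the logical parent.
class Segment:
    def __init__(self, name):
        self.name = name
        self.lp = None   # logical parent pointer (held by the logical child)
        self.lc = None   # logical child pointer (held by the logical parent)
        self.pp = None   # physical parent pointer (held by the logical child)

# Database A: physical parent and its logical child (hypothetical names)
phys_parent   = Segment("CUSTOMER")
logical_child = Segment("CUSTORD")
logical_child.pp = phys_parent

# Database B: logical parent
logical_parent = Segment("ORDER")

# Wire the two directions
logical_child.lp  = logical_child.lp or logical_parent  # A -> B via LP pointer
logical_parent.lc = logical_child                       # B -> A via LC pointer...

# ...then the logical child's PP pointer reaches its physical parent
assert logical_child.lp.name == "ORDER"
assert logical_parent.lc.pp.name == "CUSTOMER"
```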

To establish a logical relationship, three segment types are always defined – the physical parent, the logical parent, and the logical child.

Aurora explained that the pointers used in logical relationships fall into two categories: direct and symbolic. Logical relationships can be implemented using either type; however, only a direct pointer is a true pointer.

There are four types of pointers:
  • Logical Parent (LP)
  • Logical Child (LC)
  • Logical Twin (LT)
  • Physical Parent (PP).
LP pointers point from the logical child to the logical parent. An LP pointer is in the prefix of the logical child and consists of the 4-byte direct address of the logical parent.
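To make the 4-byte direct pointer concrete, here is a toy Python sketch – the prefix layout and values are invented for illustration, not IMS’s actual prefix format:

```python
import struct

# Invented example: a logical child prefix containing a segment code,
# a delete byte, and a 4-byte LP pointer holding the logical parent's
# direct (relative byte) address.
prefix = bytes([0x21, 0x17]) + struct.pack(">I", 0x0003C2F0)

segment_code, delete_byte = prefix[0], prefix[1]
(lp_rba,) = struct.unpack(">I", prefix[2:6])   # the 4-byte LP pointer

print(hex(lp_rba))   # 0x3c2f0 - the logical parent's address
```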

Logical child pointers are used only for logical relationships that use virtual pairing. With virtual pairing, only one logical child – known as the real logical child – exists on DASD, and it contains a pointer (symbolic or direct) to its logical parent. The logical parent, in turn, points to the logical child segment.

Two types of logical child pointers can be used:
  • Logical Child First (LCF)
  • A combination of LCF and Logical Child Last (LCL) pointers.
Because LCF and LCL pointers are direct pointers, the segment they are pointing to must be in an HD database. The logical parent (the segment pointing to the logical child) must be in a HISAM or HD database. If the parent is in a HISAM database, the logical child must use a symbolic pointer to point to the parent.

A symbolic LP pointer consists of the Logical Parent’s Concatenated Key (LPCK). It can be used to point into a HISAM or HD database.
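The symbolic pointer can be illustrated with a toy example – the segment names and key values below are made up:

```python
# A symbolic LP pointer is the logical parent's concatenated key (LPCK):
# the key of every segment on the path from the root down to the
# logical parent. All names and keys here are invented.
path_to_logical_parent = [
    ("DEALER", "D100"),    # root segment key
    ("MODEL",  "M27"),     # ...down the hierarchy...
    ("ORDER",  "O0042"),   # the logical parent itself
]

lpck = "".join(key for _, key in path_to_logical_parent)
print(lpck)   # D100M27O0042
```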

Physical Parent (PP) pointers point from a segment to its physical parent. IMS generates PP pointers automatically for HD databases involved in logical relationships.

CA’s Aurora Dell’Anno went on to inform the user group that logical twins are multiple occurrences of logical child segments that point to the same occurrence of a logical parent segment. Two types of logical twin pointers can be used:
  • Logical Twin Forward (LTF)
  • A combination of LTF and Logical Twin Backward (LTB).
HALDBs (PHDAM, PHIDAM, and PSINDEX databases) use direct and indirect pointers for pointing from one database record to another. The use of indirect pointers prevents the misdirected pointers that would otherwise occur when a database is reorganized. The repository for the indirect pointers is the Indirect List Data Set (ILDS): any pointers left misdirected by a reorganization are self-healing, because they are resolved through the indirect pointers.
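One way to picture the indirection (a Python toy with invented names and values – real ILDS entries are more involved than a simple token-to-address map):

```python
# Instead of holding a direct address that reorganization would
# invalidate, a pointer holds a token into the indirect list data set
# (ILDS), which always maps to the target's current location.
ilds = {"ilk-0001": 0x5000}   # ILDS entry: token -> current address

pointer = "ilk-0001"          # what the pointing segment actually stores

def resolve(ptr):
    return ilds[ptr]          # one extra hop through the ILDS

assert resolve(pointer) == 0x5000

# Reorganization moves the target; only the ILDS entry changes, so the
# pointer "self-heals" without ever being rewritten.
ilds["ilk-0001"] = 0x9A00
assert resolve(pointer) == 0x9A00
```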

The relationship between physical parent and logical child in a physical database and the LP pointer in each logical child creates a physical parent to logical parent path. For a physical parent to logical parent path, the logical parent is the destination parent in the concatenated segment.

For a logical parent to physical parent path, the physical parent is the destination parent in the concatenated segment.

When use of a physical parent to logical parent path is defined, the physical parent is the parent of the concatenated segment type. When an application program retrieves an occurrence of the concatenated segment type from a physical parent, the logical child and its logical parent are concatenated and presented to the application program as one segment. When use of a logical parent to physical parent path is defined, the logical parent is the parent of the concatenated segment type. When an application program retrieves an occurrence of the concatenated segment type from a logical parent, an occurrence of the logical child and its physical parent are concatenated and presented to the application program as one segment.

In both cases, the physical parent or logical parent segment included in the concatenated segment is called the destination parent.
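A toy sketch of what the application sees on a logical path – the field layouts and values are invented for illustration:

```python
# On a logical path, IMS presents the logical child concatenated with
# the destination parent as a single segment in the I/O area.
logical_child      = b"O0042QTY03"          # LPCK/key plus intersection data
destination_parent = b"ORDER-HEADER-DATA"   # logical (or physical) parent

concatenated = logical_child + destination_parent

# The program receives one segment image...
assert len(concatenated) == len(logical_child) + len(destination_parent)
# ...from which the two halves can still be sliced apart
assert concatenated[:10] == logical_child
assert concatenated[10:] == destination_parent
```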

There must be one physical DBD for each of the databases in a logical relationship. All statements are coded with the same format used when a logical relationship is not defined, except for the SEGM and LCHILD statements. The SEGM statement includes the new types of pointers. The LCHILD statement is added to define the logical relationship between the two segment types. The pointers for use with HD databases must be explicit in the PTR= parameter.
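From memory, and untested – so the macro parameters below are a sketch that would need checking against the manuals, and all database and segment names are invented – the two physical DBDs for a simple unidirectional relationship take roughly this shape, with the logical child’s SEGM statement naming both parents and carrying PTR=, and the logical parent’s DBD carrying the LCHILD statement:

```
* DBD for the database holding the physical parent and logical child
DBD    NAME=CUSTDB,ACCESS=(HDAM,OSAM)
SEGM   NAME=CUSTOMER,PARENT=0,BYTES=60
FIELD  NAME=(CUSTNO,SEQ,U),BYTES=6,START=1
SEGM   NAME=CUSTORD,BYTES=20,PARENT=((CUSTOMER),(ORDER,P,ORDDB)),PTR=(LPARNT,TWIN)
DBDGEN
FINISH
END

* DBD for the database holding the logical parent
DBD    NAME=ORDDB,ACCESS=(HIDAM,OSAM)
SEGM   NAME=ORDER,PARENT=0,BYTES=80
FIELD  NAME=(ORDNO,SEQ,U),BYTES=8,START=1
LCHILD NAME=(CUSTORD,CUSTDB),POINTER=NONE
DBDGEN
FINISH
END
```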

When a logical relationship is used, you must define to IMS the physical databases involved in the relationship. You also often need to define the logical structure to IMS, since this is the structure the application program perceives.

With HALDB databases, bidirectional logical relationships must be implemented with physical pairing. When loading a new partitioned database with logical relationships, the logical child segments cannot be loaded as part of the load step. IMS adds logical children by normal update processing after the database has been loaded. HALDBs use an Indirect List Data Set (ILDS) to maintain logical relationship pointers when logically related databases are reorganized.

Aurora recommended a White Paper called Converting Logical Relationships to Support Partitioned Databases written by William N Keene – who some of you will have heard speak at user group meetings and who is a regular attendee. The White Paper provides guidance and examples for converting a set of logically related databases from bidirectional, virtual pairing using direct pointers to bidirectional, physical pairing using symbolic pointers.

The presentation contained a lot more detailed information, including some thoughts on performance. Aurora concluded by stating that logical relationships resolve conflicts in the way application programs need to view segments in the database. With logical relationships, application programs can access segment types in an order other than the one defined by the hierarchy, and have a data structure containing segments from more than one physical database.

Membership of Virtual IMS CONNECTION is free and open to anyone interested in IMS. Webinars are presented free to members. You can join by going to http://www.virtualims.com/register.aspx and completing the form.

Sunday 7 March 2010

What's going on - on my Open Systems Adapter?

An Open Systems Adapter (OSA) is a network controller between mainframes and other computing platforms. An OSA supports Ethernet, Token Ring, and FDDI (Fibre Distributed Data Interface) connectivity, and can offer data throughput speeds of up to 10 gigabits per second. An OSA provides a way of attaching a LAN (Local Area Network) to a mainframe – or, looking at it the other way round, a way of connecting a mainframe to a LAN! Either way, that makes it a very crucial piece of kit.

At most installations, life isn't quite so simple. For example, a single OSA can be shared across several LPARs and/or IP stacks. It can handle multiple channels and multiple types and instances of ports. The consequence of all this is that a single OSA can support a very large number of users.

You'd think this kind of complexity would make the OSA a device that is monitored by every site that's got one. After all, what would happen were it to suddenly stop working? But the complexity doesn't stop there. An OSA exists as a z/OS device, a VTAM device, an IP interface, and an SNMP device. And it's this level of complexity that makes the collection of data so tricky. There isn't even a low-level tool providing a single query that will reveal how an OSA is performing.

In truth, OSAs are resilient, managing to recover and log any non-fatal errors encountered in an internal Management Information Base (MIB). The MIB is a database containing information about the current OSA set-up and state information. Getting information out of the MIB is only possible with low-level tools and a good knowledge of the associated MIB structures. An OSA MIB may well contain many thousands of separate fields identified by Object Id and an Index relating them to a specific LPAR, channel, or port. Additional information about an OSA can be found in z/OS device tables, IP stacks, and VTAM control blocks.
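The kind of lookup involved can be pictured with a toy model – this is not the real OSA MIB, and the object IDs and index values here are invented:

```python
# MIB entries are identified by an object ID plus an index tying the
# value to a specific LPAR/channel/port, so pulling one port's state
# means filtering the table on that index.
mib = {
    ("1.3.6.1.x.portState", "lpar1.chpid20.port0"): "enabled",
    ("1.3.6.1.x.portState", "lpar1.chpid20.port1"): "disabled",
    ("1.3.6.1.x.errCount",  "lpar1.chpid20.port0"): 0,
    ("1.3.6.1.x.errCount",  "lpar1.chpid20.port1"): 7,
}

def query(oid_prefix, index):
    """Return every field under oid_prefix for one LPAR/channel/port."""
    return {oid: val for (oid, idx), val in mib.items()
            if oid.startswith(oid_prefix) and idx == index}

print(query("1.3.6.1.x", "lpar1.chpid20.port1"))
```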

Perhaps surprisingly, many sites feel that an IP monitor is probably all that's needed to keep an eye on their OSA. However, a new alternative is ZEN OSA Monitor (ZOM) from William Data Systems (www.willdata.com). ZOM provides users with the ability to get a health check of their OSA, and it also provides a soft reset feature. According to the WDS Web site: "ZOM dynamically detects any status changes or increases in error counts, with all such changes summarized in easy-to-read panels in ZEN, with full alerting and reporting capabilities. A simple but effective system of traffic lights highlight any changes to reference values, ensuring these critical devices are maintained at optimal performance levels."

Some readers may be familiar with William Data Systems’ ZEN EE (Enterprise Extender) Monitor, codenamed Ferret.

With ZOM, the OSA dashboard provides an at-a-glance list of a user's monitored OSAs with an indication of current usage and status. Several fields contain drill-down links to further, more detailed displays.

With OSA devices playing such a critical role in the communication between mainframes and the rest of the computing environment, it would make sense to utilize a product that's able to access important performance data and display it in an easily digestible format. Anyway, they're not paying me to sell their product for them, but I thought readers of this blog would find the existence of this software well worth knowing about.