Sunday, 13 April 2014

Name the 12 Apostles - a quiz

Because we’re coming up to Easter, I thought I’d do something different this week!

Can you name the 12 Apostles?

Well, obviously, there’s Matthew, Mark, Luke, and John, who have Gospels named after them – that’s four. Whoops, sorry, you can’t include Mark and Luke – they weren’t Apostles.

Well, from the Easter story we’re all familiar with Peter and Judas Iscariot. So that’s four we can name – we’re a third of the way there.

So who does that leave? Hmmm!

There was Simon. Oh no, not him, because he’s already on our list – he’s also called Peter. So that’s still eight to go.

There was Doubting Thomas – that’s five.

But who were the others? Well, there’s James, who was John’s brother, and there was Andrew, who was Peter’s brother – that’s seven. Just five to go.

So who are those last five? Bonus marks for anyone who remembers Bartholomew, James (the Less), Philip, Simon the Canaanite, and Thaddeus.

So hang on, what about all those people who keep saying Jude – the patron saint of desperate causes. Yes, he was an Apostle, but, apparently, he is also known as Thaddeus – mainly because early writers didn’t want him confused with Judas Iscariot, hence the alternative name.

Which brings me nicely back to dealing with Judas Iscariot. Obviously he was an Apostle – one of the 12 – but you can’t imagine early Christians being too keen on including him in a list of revered Apostles! So who became the new twelfth man (if you’ll pardon a sort of cricketing metaphor)? In Acts it is said that following the death of Judas a new twelfth Apostle was appointed. He was called Matthias. He usually appears in lists of the twelve Apostles rather than the disgraced Judas Iscariot. However, some argue that only people actually sent out by Jesus could truly be called Apostles.

Troublingly, the Gospel of John also mentions an Apostle called Nathanael (see John 1:45-51 and 21:2). Most authorities assume this is an alternative name for Bartholomew – so don’t worry about that one.

St Barnabas (who, confusingly, was originally called Joseph) is referred to as an Apostle in Acts 14:14. His claim to fame was introducing Paul (of road to Damascus fame) to the disciples in Jerusalem. And that’s the lot!

And for lots of bonus marks, can you name the thirteenth Apostle? Of course, there is more than one answer to this question.

Paul (of Tarsus) described himself as an Apostle following his Damascene conversion.

The Roman Emperor Constantine legalized Christianity across the Roman Empire in the 4th century (it only became the official state religion later that century, under Theodosius). He is often referred to as the thirteenth Apostle.

Plus, there’s also a long list of people who brought Christianity to some particular part of the world, and who are referred to as the Apostle of somewhere or Apostle to somewhere else (for example St Augustine, the Apostle to England, or St Patrick, the Apostle to Ireland).

So, a much harder quiz than you might have thought. How many did you know?

If you celebrate it, have a good Easter next week. It’s back to mainframes in two weeks’ time.

Sunday, 6 April 2014

Happy birthday mainframe

7 April marks the 50th anniversary of the mainframe. It was on that day in 1964 that the System/360 was announced and the modern mainframe was born. IBM’s Big Iron, as it came to be called, took a big step ahead of its competitors – Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell, later known collectively as the BUNCH. The big leap of imagination was to offer a single architecture across the entire System/360 line, so the same software would run on every model.

It was called System/360 to indicate that this new system would handle every need of every user in the business and scientific worlds, because it covered all 360 degrees of the compass. That was a triumph for the marketing team, because it would otherwise have been called the rather dull System 500. System/360 could emulate IBM’s older 1401 machines, which encouraged customers to upgrade. Famous names among its designers are Gene Amdahl, Bob Evans, Fred Brooks, and Gerrit Blaauw. Gene Amdahl later created a plug-compatible mainframe manufacturing company – Amdahl.

IBM received plenty of orders, and the first mainframe was delivered to Globe Exploration Co. in April 1965. Launching and producing the System/360 cost more than $5 billion, making it the largest privately-financed commercial project up to that time. It was a risky enterprise, but one that worked. From 1965 to 1970, IBM’s revenues went up from $3.6 billion to $7.5 billion, and the number of installed IBM computer systems more than tripled, from 11,000 to 35,000.

Jumping ahead slightly, the System/370 Model 145 (announced in 1970) was the first IBM computer to have its main memory made entirely of monolithic circuits – silicon memory chips rather than the older magnetic core technology.

In 1970, the System/370 was introduced. The marketing line was that the System/360 had been for the 1960s; for the 1970s you needed a System/370. All thoughts of compass points had gone by then. IBM’s revenues and employee numbers kept climbing (the workforce grew from 120,000 to 269,000), and, at times, customers had a two-year wait to get their hands on a new mainframe.

1979 saw the introduction of the 4341, which was 26 times faster than the System/360 Model 30. The 1980s didn’t have a System/380. But in 1990, the System/390 Model 190 was introduced. This was 353 times faster than the System/360 Model 30.

1985 saw the introduction of the 3090 (later rebranded the Enterprise System/3090), which used one-million-bit memory chips and came with Thermal Conduction Modules to speed chip-to-chip communication. Some models offered an optional Vector Facility, which made them faster at numeric work. It replaced the 3080 series.

The 1990s weren’t a good time for mainframes. For example, in March 1991, Stewart Alsop stated: “I predict that the last mainframe will be unplugged on March 15, 1996.” Not the most successful prediction, but it definitely caught the zeitgeist of the time. It was the decade of the System/390 (back to the old-style naming convention). We saw the introduction of ESCON (Enterprise System Connection), IBM’s high-speed fibre-optic channel architecture.

The System/360 gave us 24-bit addressing on a 32-bit architecture. The System/370 added multi-processor support and virtual storage, and then, with 370-XA, 31-bit addressing alongside the original 24-bit. With the System/390 we got the OS/390 operating system. As we moved into the 2000s, we got the zSeries (z/Architecture) and the z operating systems, giving us 24-, 31-, and 64-bit addressing. (For a sense of scale: 24 bits can address 2^24 bytes – 16MB; 31 bits can address 2GB; and 64 bits a theoretical 16 exabytes.) In 2003, the z990 was described as “the world’s most sophisticated server”. In 2005 we got the zIIP specialty engine. In 2008 it was the z10 EC, with a high-capacity, high-performance quad-core CPU chip. In 2010, the z196 (zEnterprise) brought a 96-core design and distributed systems integration (zBX). In 2012, the zEC12 was described as an integrated platform for cloud computing, with integrated OLTP and data warehousing. Back in 2000, IBM had said it would support Linux on the mainframe, and, by 2009, 70 of IBM’s top 100 mainframe customers were estimated to be running Linux. A zEnterprise mainframe can run 100,000 virtual Linux servers. Modern mainframes run Java and C++. And the latest mainframe is compatible with the earliest System/360, which means that working code written in 1964 will still run on the latest zBC12.

In terms of operating systems, OS/360 (in its MVT flavour) evolved into OS/VS2 SVS, and then OS/VS2 MVS. That became MVS/SE, which became MVS/SP, which became MVS/XA and then MVS/ESA, before becoming OS/390 and finally z/OS.

And what does the future look like – with people migrating to other platforms and the technically-qualified mainframers beginning to look quite long in the tooth? The answer is rosy! Mainframe applications are moving from their traditional green screens to displays that look like anything you’d find on a Windows or Linux platform. They can provide cloud-in-a-box capability. They can integrate with other platforms. They can derive information from Big Data and the Internet of Things. Their biggest problem, perhaps, is that decision makers at organizations don’t value them enough. Surveys that identify the cost benefits of running huge numbers of IMS or CICS transactions on a mainframe compared to other platforms are generally ignored. Many people think of them as “your dad’s technology”, and high-profile organizations like NASA are unplugging them.

So, although they are often misperceived as dinosaurs, they are in fact more like quick-witted and agile mammals. They provide bang up-to-date technology at a price that is value for money. I predict that we will be celebrating the mainframe’s 60th birthday and its 70th, though we may not be able to imagine how compact it will be by then and what new capabilities it will have.

Happy birthday mainframe.

Sunday, 30 March 2014

Tell me about this Yammer thing

I’ve been to a few companies recently that have been using Yammer as a business tool. If you’ve got offices that are spread out, or if your workforce aren’t usually in the office, then it provides an easy way for people to share things – comments, documents, or images. And you can form groups, so discussions that are only relevant to a small group of people stay within that group or team.

Yammer started life in 2008 and was bought by Microsoft in 2012. It’s described as an enterprise social network. That means it’s not a public social network like Facebook, it’s for internal communication between members of an organization or group.

It’s free, it’s very easy to use (if you’ve ever used Facebook), and it provides a private and secure place for discussion. The simplest way to use Yammer is from your browser (Internet Explorer, Firefox, Chrome, etc), and you can also download the app for your smartphone or tablet.

It’s easy to set up and use, but I thought I’d put together some instructions for new users, so they know how to get on and start using it.

To sign up, go to www.yammer.com and type your company e-mail address into the large sign-up box in the middle of the page. You can’t use your personal e-mail address – Yammer uses your company’s e-mail domain to work out which organization’s network you belong to.

Complete your Yammer profile and add a photo. New people in your organization may not be familiar with who you are and your particular skill set.

You can join groups and follow topics that are relevant to you. If Yammer gets very busy with people posting, you won’t want to be informed every time there’s a new post. So, click on the three dots in the upper right-hand corner. In the drop-down menu, select ‘Edit Profile’. Then select ‘Notifications’ from the list on the left, and then choose how often you want to receive notifications. ‘Save’ your choice. There’s a ‘Back Home’ box top-left to get back.

You can also follow other people – that way you get to see what they’re posting.

When you come to use Yammer on subsequent occasions, you simply click on ‘Log In’ on the right of the top menu bar.

Now you can start to use Yammer.

You can post messages – these can be comments, questions, or updates. You can post links to articles or blogs elsewhere on the Web.

You can follow people, which means that you want to view messages from them in ‘My Feed’. It’s not like a friend request. They don’t have to agree. They don’t have to follow you back.

You can read what other people are posting and get a feel of what’s going on across the organization.

You can ‘Like’ other people’s posts.

You can find out more about people in your organization by reading their profile.

You could start your own group or join existing groups.

You can upload pictures. You can organize events/meetings. You can survey what people think about things.

You can use topics to group posts around a specific subject. To add a topic to a post, click ‘add topic’ while writing the message, or use a hashtag. You can also add topics to a published message by clicking ‘more’. Hashtags (#) identify what posts are about and make finding information easier.

You can search for information in the search box near the top of the page. This will find whether anyone else has posted about a particular topic.

And you can send a direct message in three ways. First, use the @ sign followed by the user’s name – as you start to type it, a drop-down menu will give you suggestions. Second, you can send a private message:
  • Click ‘Inbox’ in the left column.
  • Click ‘Create Message’ on the right sidebar.
  • Select ‘Send Private Message’.
  • In the ‘Add Participants’ field, start to type the person’s user name. A drop-down list of matching user names appears.
  • Select the name of the person you want to send the message to.
  • Write your message, and then ‘Send’.
Third, you can send a message through ‘Online Now’:
  • Click ‘Online Now’ in the bottom-right corner.
  • Start writing the person’s name. A drop-down list of matching user names appears.
  • Use the up and down arrows, and ‘Enter’ to select a name. A message box opens.
  • Write your message, and then ‘Send’.
Recipients are notified that they have a message.

Unbelievably, Yammer refers to all communications inside Yammer as “Yams”. Yams are sorted into various feeds. A feed, if you’re new to social media, is a way of keeping you up-to-date with content that other people are posting.

I think many organizations would benefit from an internal social media tool. There are alternatives to Yammer available, but I think it can be very useful within an organization to help with communication. And it can be fun!

Sunday, 23 March 2014

What is Software Defined Anything?

If you’ve sat through a training seminar recently, you’ve probably seen a slide talking about software-defined anything or software-defined everything. Or you may have seen the acronym SDx and wondered what it is and where it’s come from. So let’s have a look at what they’re talking about.

Basically, what we’re looking at is using software to control different kinds of hardware, and then to make that software able to control multiple-component hardware systems. With the growth of the Internet of Things (IoT), it makes sense to start thinking about being able to create rules that are implemented in software that can be used to control a myriad of different types of devices.

At the moment there are a number of areas using software-defined technology. For example, there’s Software-Defined Storage (SDS), which seems to apply to all sorts of storage software, particularly virtualization software. Different vendors use the term loosely for different things. Software-Defined Networking (SDN) is where network devices are programmable, and so networks themselves are more dynamic. Again, it’s a term that’s used by different vendors for different things.

Software-Defined Storage Networks (SDSN) is an attempt to virtualize storage networks by separating the actual physical network from its controlling software. A Software-Defined Hypervisor (SDH) seems to refer to virtualizing the hypervisor layer and separating it from its management console. And finally, there’s Software-Defined Infrastructure (SDI) aka Software-Defined Data Centre (SDDC), which is an aspirational concept where data centre services are controlled by policy-driven software.

Two things probably leap to mind about now. Firstly, this seems a lot like marketecture! We’ve seen this before, where vendors are really selling us an idea of something rather than a tangible reality. We are very much in the early days of this sort of thing. Secondly, this is not directly linked to mainframes – these are VMware’s ideas, as well as those of a huge number of other companies.

Having said that, of course, the newer hybrid mainframes from IBM will be able to make use of this technology as it becomes available in reality. Also, Gartner reckons that SDx is one of the major disruptive technologies to watch. It makes it easier to scale up existing architecture and even try out different architectures. It also makes it possible to tune networks, matching network performance to workloads. And, of course, the main selling points are flexibility, agility, security, and price.

IBM’s Smarter Computing blog has an interesting blog by Shamin Hossain called “Software defined everything: When a data center becomes soft”, which can be found at http://www.smartercomputingblog.com/software-defined-environment-2/software-defined-everything/.

Clearly the prefix ‘software-defined’ is one that we’re going to hear a lot more about this year.

Sunday, 16 March 2014

Happy birthday WWW

The World Wide Web celebrated 25 years on 12 March – although that’s really 25 years since conception rather than since birth. It was on 12 March 1989 that Sir Tim Berners-Lee first put forward his proposal for what became the World Wide Web.

The 34-year-old software engineer at the CERN physics lab in Geneva wrote a paper called “Information Management: A Proposal”. The driving force was the need not only to communicate with colleagues, but also to keep in contact with the many scientists who had worked at CERN and were now working elsewhere.

It soon became clear that the idea could be extended beyond CERN, and in 1990, working with Robert Cailliau, Berners-Lee reworked the proposal around hypertext. The first Web site was created that year. Their thinking at the time was that there would be a web (the WorldWideWeb) of hypertext documents, and people could view them using a browser.

Steve Jobs is strangely linked to this story, because the first Web server was a NeXT Computer. These workstations were built by NeXT, the company Jobs had founded with a team that included other ex-Macintosh staff. There was actually a note on the computer telling people not to turn it off. By late 1991, people outside CERN could access the Web as a service available on the Internet. By 1992, there was a server outside of Europe, set up in Palo Alto at the Stanford Linear Accelerator Center.

What Berners-Lee did that was special was to combine hypertext and the Internet. He also developed three technologies that we take for granted nowadays. They are: Hypertext Transfer Protocol (HTTP); Hypertext Mark-up Language (HTML); and unique Web addresses – URLs (Uniform Resource Locators).

In 1993 the Web browser called Mosaic was released. This had a graphical user interface and made browsing the Web easy and quick. The World Wide Web Consortium (W3C) was founded in 1994. The rest, as they say, is history.

Using the 25th birthday as a springboard, Sir Tim Berners-Lee has called for a bill of rights to protect freedom of speech on the Internet and users’ rights following leaks about government surveillance of online activity. Berners-Lee has said that there is a need for a charter like the Magna Carta to help guarantee fundamental principles.

Edward Snowden’s leaking of so many documents revealing (or confirming) that governments all over the world are monitoring Internet activity (as well as phones) has brought Web privacy to the attention of the public. It seems that the NSA has been collecting personal data about Google, Facebook, and Skype users.

And now we can use the Web on our tablets and smartphones. We can buy just about anything from Amazon. We can look up everything on Google, find out the details on Wikipedia, keep up with our friends on Facebook, watch videos on YouTube, shop, bank, and choose the best deals for insurance. We can send e-mails, and tweet about what we’re doing and who we’re with. We can also apply for a job, listen to music, read a book, and browse photos. In fact, we can do just about anything it’s possible to do.

It’s true that there is a dark side to the Web. People can find out a lot of information about you from a quick online search, and organizations, many of which are part of the government, can find out even more about your browsing habits etc. But on the whole, for most people, getting online is a natural part of the day – and it’s a very enjoyable part of their day. So well done TimBL, great idea. And happy birthday to the World Wide Web.

Sunday, 9 March 2014

REXX - the wonder program!

REXX (REXX or Rexx) has been around since the early 80s. I first came across it in the mid-80s, when sites were beginning to use it with VM and CMS as a powerful replacement for EXECs. I thought it would be interesting to take a look at this venerable, but still powerful tool in the system programmer’s toolkit.

Amazingly, REXX wasn’t written as an official IBM project: it was written by IBMer Mike Cowlishaw in his own time, in Assembler. It was designed as a macro/scripting language, and its syntax drew heavily on PL/I. It first saw the light of day at SHARE in 1981, and became a full IBM product in 1982.

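To give you a flavour of the language, here’s a minimal sketch of a classic Rexx program (my own illustrative example, not from the original implementation). The leading comment is traditional – on CMS it was how the interpreter told a Rexx EXEC apart from older EXEC files:

  /* A minimal Rexx sketch: greet whoever is named on the command line */
  parse arg name .                /* take the first word of the arguments */
  if name = '' then name = 'world'
  do i = 1 to 3                   /* Rexx loops read almost like English */
    say 'Hello,' name '- pass' i
  end
  exit 0
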
IBM has implemented REXX on VM, MVS, VSE, CICS, AIX, and AS/400s, and on subsequent products. There are even versions available for Java, Windows, and Linux. A compiled version for CMS became available in 1987 and was written by Lundin and Woodruff.

REXX was so popular that versions were developed for other platforms, including Unix, Solaris, Apple Mac OS X, OpenVMS, and lots of others. There’s also NetRexx, which compiles to Java byte code, and ObjectREXX, which is an object-oriented version of REXX.

In 2005, Open Object Rexx (ooRexx) was announced. It has a Windows Script Host (WSH) scripting engine, so Rexx code can be used to script Windows, and it also has a command-line Rexx interpreter.

Those of you who use the ZEN family of products from William Data Systems will be interested to know that ZEN Rexx, formerly available with ZEN’s AUTOMATION component, has now been made available to all WDS customers through the base ZEN component.

ZEN’s Rexx support comprises: a ZEN Rexx Interface; a ZEN Rexx Function Pack; and a command interface. The ZEN Rexx Interface provides support for running Rexx programs under ZEN in a similar way to running Rexx programs under TSO or NetView. You can also run ZEN Rexx programs using a Modify (F) command from the z/OS console. Under ZEN, Rexx programs can be run from the ZEN Profile at initialization time, from the Command Facility and System Log panels, or from a user-defined ZEN menu item.

The ZEN Rexx Function Pack provides extensions to standard REXX through which users can communicate directly with ZEN. The command interface enables users to issue commands from their ZEN Rexx programs and get any responses back. This means that commands provided by their WDS ZEN components can be issued from a ZEN Rexx program, as well as any z/OS, VTAM, or other product command.

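If you haven’t scripted host commands from Rexx before, here’s a generic TSO/E sketch of the same idea – issuing a command and trapping its response. This uses the standard TSO/E OUTTRAP function, purely as an illustration; it is not the ZEN interface itself:

  /* Generic TSO/E Rexx sketch: run a command and capture its output */
  call outtrap 'line.'            /* trap command output into the line. stem */
  address tso "LISTC LEVEL("userid()")"
  say 'Command ended with rc' rc 'and produced' line.0 'lines'
  do i = 1 to line.0
    say line.i                    /* echo each trapped line of output */
  end
  call outtrap 'off'              /* switch trapping off again */
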
REXX may have had humble beginnings, with Mike Cowlishaw working in his own time, but it has gone on to conquer the world (metaphorically) and is now in regular use, in different incarnations, on just about every platform imaginable, including your smartphone and tablet.

Saturday, 1 March 2014

Big Data 2.0

We were only just beginning to get our heads around Hadoop and Big Data in general when we find everyone is starting to talk about Big Data 2.0 – and it’s bigger, faster, and cleverer!

Hadoop, as I’m sure you know, is an open source project, and it’s available from companies like IBM, Hortonworks, Cloudera, and MapR. It provides a storage and retrieval method (HDFS – the Hadoop Distributed File System) that can knock the socks off older, more expensive options such as databases sitting on SAN or NAS storage. It also means that more data can be stored – and not just human-keyed data, but data from the Internet of Things (point-of-sale machines, sensors, cameras, etc) as well as social media. It’s a data hoarder’s dream come true: no need to delete (throw away) anything. But with all that data, it becomes important to find some way to ‘mine’ it – to derive information from the data that can be commercially useful. And that’s what’s happening: deeper and richer sets of results, beneficial to organizations, are being derived from the data.

With Version 2 of Hadoop, everything is faster. Data is processed at amazing speeds in-memory, and analysis takes place at speed on terabytes of data. It also allows decisions to be made at speeds unavailable to humans. Research shows that algorithms with as many as six variables out-perform human experts in most situations – this was tested against experts predicting the future price of wine, and against stock-market traders. So now, Big Data 2.0 means better decisions can be made at incredible speed.

It’s also possible for machines to learn using these techniques – the classic Google example being software that learned to identify the presence of a cat in video footage, with no-one being quite sure how it was doing it.

For mainframe sites, Hadoop isn’t just some distant dream. You don’t need a room full of Linux servers to make it work – in fact that’s the clue to the solution. Much of this works very nicely on Linux on System z (or zLinux as many people still think of it). And once the data is on a mainframe, it becomes very easy to copy parts of it to a z/OS partition for more work to be done on the data. Cognos BI runs on the zLinux partition, so the first level of information extraction can be performed using that Business Intelligence tool. Software vendors are coming to market with products that run on the mainframe. BMC has extended its Control-M automated mainframe job scheduler with Control-M for Hadoop. Syncsort has Hadoop Connectivity. Compuware has extended its Application Performance Management (APM) software with Compuware APM for Big Data. And Informatica PowerExchange for Hadoop provides connectivity to the Hadoop Distributed File System (HDFS).

So what’s it like on the ground and away from the PowerPoint slides? At the moment, my experience is that the really big companies – Google, Amazon, Facebook, and similar – are pushing the envelope with Big Data. But it seems that many large organizations aren’t strongly embracing the new technology. Do banks, insurance companies, and airlines – the main users of mainframes – see a need for Big Data? Seemingly not, or not yet. Perhaps they are waiting for money to be spent and mistakes to be made before they adopt best practice and reap the benefits. Perhaps they are waiting for Big Data V3?

Big Data is definitely here to stay, and those companies that could benefit from its adoption will gain a huge commercial advantage when they do adopt it.