If you do celebrate it, merry Christmas.
And if you don't, have a great time anyway.
Making a point
All too often the mainframe is seen as the expensive piece of kit that sits quietly and just works. No-one seems to get too excited about making any changes to it, but they do get excited about cloud technology, mobile apps, and whatever else they see as interesting at the moment. That sometimes leaves the mainframe team unable to argue a case for change. But suppose there were a way, not necessarily of winning every argument, but certainly of putting up a good fight in the cut and thrust of a finance meeting?
There is a book by NLP (Neuro-Linguistic Programming) expert Robert Dilts called “Sleight of Mouth” that looks at the way arguments are constructed and at ways of rebutting each type of counter-argument. The theory behind it is, basically, that our thoughts and actions sit inside what’s called a frame of reference, and often we’re not aware of this. These frames of reference can lock us into quite restricted thinking, so that it seems we have only very few choices. A ‘reframe’ gives us a different perspective on a problem, and so opens the door to other potential solutions.
The book also looks at ‘beliefs’, which in this case define the relationship between values and their causes, indicators, and consequences. Beliefs are typically expressed in the form of a ‘cause-effect’ or a ‘complex equivalence’. Cause-effect assumes that one thing causes, or is caused by, another without any hard evidence. You hear people use words like: makes, because, if...then, as...then, then, since, so. Complex equivalence is where complex situations, ideas, objects, or their meanings are equated as synonymous. You hear people say things like: that means, that just means, it must be that, what else could it mean? For example: “The boss has his door closed. That means he’s planning to get rid of the mainframe”.
In his book, Robert Dilts identified 14 different Sleight of Mouth patterns. You don’t have to use them all, but it can be useful to be aware of what the 14 techniques are for when you do need to use them. And, of course, each of the techniques has a name.
So, that was quite a long introduction. Let’s have a look at the Sleight of Mouth patterns. Let’s assume your CFO says, “We need to get rid of the mainframe because the company is losing money”. Notice the A-because-B pattern – so in our response, we can focus on either A or B, or both. I will give only one example of each, although many others are possible.
1 Intention: what could be the positive intention? In this case saving money (it seems).
Response: I very much admire and support your desire to save money, but...
2 Redefine: how can you lessen the impact of these negatives? Use words that are similar but may imply something different.
Response: I agree we need to look for new ways to save money.
3 Consequences: focus on a consequence that leads to challenging the belief.
Response: Taking a look at our corporate spending is definitely the first step.
4 Chunk down: look at a specific element that challenges the belief.
Response: I am not sure how losing our company’s core IT platform will save us money.
5 Chunk up: generalize in order to change the relationship defined by the belief.
Response: Any change to our IT structure can have unforeseen consequences.
6 Counter example: find an exception that challenges the generalization defined by the belief.
Response: It’s hard to see cost savings from losing the mainframe when the work that generates the bulk of our income runs on it.
7 Analogy: use an analogy or metaphor that challenges the generalization defined by the belief.
Response: Not all change is good – ask the climate.
8 Apply to self: use key aspects of the belief to challenge the belief.
Response: Might it not save money to look at some other areas of high spending?
9 Another outcome: propose a different outcome that challenges the relevancy of the belief.
Response: Maybe the problem is not so much whether we get rid of the mainframe, but whether we are doing the right things to cut costs.
10 Hierarchy of criteria: Re-assess the belief based on a more important criterion.
Response: Staying in business is more important than our IT policy.
11 Change frame size: re-evaluate the implication of the belief in the context of a longer (or shorter) time frame, a larger number of people (or from an individual point of view) or a bigger or smaller perspective.
Response: Successful organizations have been cutting costs for centuries. Those that stayed in business made the best decisions.
12 Meta frame: challenge the basis for the belief.
Response: Your belief about getting rid of the mainframe assumes that you know the ‘right’ IT infrastructure, and those who do not share your view have negative intentions.
13 Model of the world: look at the belief from a different perspective (model of the world).
Response: Did you know that the majority of Fortune 500 companies have a mainframe?
14 Reality strategy: re-assess the belief based on the fact that beliefs are based on specific perceptions.
Response: What particular aspects of using a mainframe do you feel fearful about?
You can find out far more about these in Robert Dilts’ book, but it’s interesting to know that these techniques are out there and available.
Sunday, 24 August 2014
Apps to make you feel better
We might spend all day in a mainframe environment, but, when we leave it, the closest computing device to hand is probably our mobile phone. And if you’re waiting for someone or something, you probably get out your phone and check your e-mail and texts, and then what do you do? You could play a game, or you, perhaps, could try some self-improvement apps. I thought that, this week, I might take a look at some of the apps available for hypnotherapy.
Now, if you're hypnotized, you're meant to be in a state of heightened suggestibility and responsiveness. So rather than getting you chicken dancing or eating a raw onion, as you might see on a stage show, hypnotherapy can make positive changes, creating new responses, thoughts, attitudes, behaviours, or feelings. So, while you’re waiting for a burger or your friend to turn up, doesn’t it make sense to use an app on your phone to make you a better person?
A lot of people worry about giving presentations and generally feel that they could do with a bit more self-confidence. The good news is that there are hypnosis apps for that. There’s Total Confidence & Success (Darren Marks hypnotherapy), Confidence Now, and Confident Public Speaking Now (Pocket Hypnotherapy range), Free Hypnosis and Self Esteem (Erick Brown Hypnosis), and Automatic Motivation Hypnosis (Mastermind App).
Sleep is another area where people don’t feel they’re getting the right number of hours or the right quality. And here, again, there are apps available. There’s Sleep Deeply (Darren Marks hypnotherapy), Sleep Soundly Hypnosis (Kym Tolson and Hani Al-Qasem), Relax and Sleep Well (Diviniti Publishing Ltd), Sleep Now (Pocket Hypnotherapy range), and Deep Sleep and Relax Hypnosis (Mindifi).
Or perhaps your issue is weight loss (although a good therapist will never use that term because if we lose things – like our keys – we tend to spend a long time searching for them and making sure we get them back). Anyway, you can try Easy Weight Loss (Darren Marks hypnotherapy), Weight Loss Hypnosis (Mindifi), Lose Weight Now (Pocket Hypnotherapy range), or Lose Weight with Hypnosis (Shanedude).
For smokers, there are stop smoking apps – and, again, in solution-focused terms, I mean becoming a non-smoker. Remember, no-one goes to anti-war rallies anymore, everyone goes to peace rallies. Anti-war is problem focused, peace is solution focused. There’s Easy Stop Smoking (Darren Marks hypnotherapy), Stop Smoking Now (Pocket Hypnotherapy range), and Quit Smoking Hypnosis and Quit Smoking Hypnosis Pro (Mindifi).
There are some generic self-hypnosis apps. For example Self Hypnosis Tips (bigo), Self Hypnosis (IDJ Group), Hypnosis Session (Andrew Brusentov), Hypnosis (Adam Eason Hypno), and Free Hypnosis/Hypnotherapy and Self Development Audios by Joseph Clough (Free Hypnosis).
And there’s a mixed bag of other hypnotherapy treatments. Darren Marks hypnotherapy provides Freedom From Negative Feelings, Control Alcohol, Freedom From Fears and Phobias, Total Relaxation, Sport and Fitness Excellence, and Healing Hypnosis. The Pocket Hypnotherapy range includes Relax Now, Manage IBS Now, Manage PMS Now, Manage Fear of Flying Now, and Cope with Bereavement Now. Mindifi supplies Law Of Attraction Hypnosis, Free Your Mind Hypnosis, Personal Development Hypnosis, Money and Success Hypnosis, Business Success Hypnosis, and Hypnobirthing Hypnosis. And, to complete the list, there’s Anxiety Free Hypnosis (Hypnosis, Meditation, and Coaching Group).
So no more playing Angry Birds, try a little personal development and relaxation with hypnotherapy apps. Then it’s back to the mainframe world for work.
Sunday, 3 August 2014
Business continuity planning
It seems strange talking about business continuity planning for mainframe sites because most of them created their plan back in the days when BCP was called DR (Disaster Recovery). And although, for mainframe sites, things don’t seem to have changed to any great extent in perhaps as much as 30 years, the truth is, they have. And it’s a good idea to re-evaluate the Business Continuity Plan now.
In fact, it’s probably a good idea to start from the beginning, in terms of planning, and see what systems you have in place that need to be available for the organization to continue in business, and how long you can be ‘down’ for. It was often a joke that non-mainframe sites had rooms full of servers running Linux and/or Windows and no-one knew exactly what ran on which hardware – and yet, something similar can be the case with mainframes. There is nowadays quite a disconnect between what an end user views as a single transaction and how the subsystems may see it. An end user may simply need to access some data – but, for that to happen, the transaction may start in CICS, access DB2 data, go back to CICS, involve IMS, go back to CICS, access some VSAM files, and finally end up in CICS again. So subsystem-level recovery can lead to confusion.
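The multi-subsystem flow above can be sketched in miniature. This is purely an illustrative model – the Resource class and run_unit_of_work coordinator are invented for the sketch, not real CICS/DB2/IMS interfaces – but it shows why recovery has to be coordinated at the level of the whole unit of work: if one subsystem fails partway through, every subsystem already touched must be backed out, or the end user is left with a half-completed transaction.

```python
# Illustrative only: a unit of work spanning several "resource managers"
# (standing in for CICS, DB2, IMS, VSAM). A coordinator backs out every
# resource the unit of work has touched when any one of them fails.

class Resource:
    def __init__(self, name):
        self.name = name
        self.committed = False

    def do_work(self, fail=False):
        if fail:
            raise RuntimeError(f"{self.name} failed")
        self.committed = True

    def back_out(self):
        # Undo this resource's part of the unit of work
        self.committed = False

def run_unit_of_work(resources, fail_at=None):
    touched = []
    try:
        for r in resources:
            r.do_work(fail=(r.name == fail_at))
            touched.append(r)
        return "committed"
    except RuntimeError:
        # Coordinated backout: undo everything touched, in reverse order,
        # rather than letting each subsystem recover on its own.
        for r in reversed(touched):
            r.back_out()
        return "backed out"
```

If IMS fails mid-flight, the CICS and DB2 work already done is backed out too, so the user sees one clean failure rather than a partially applied transaction.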
But let’s start at the beginning. What’s the first thing to do? Identify the business assets that need to be protected, then assess how business-critical each process is and create a priority list. Next, find the data and technology that’s needed for each business process to occur. Armed with that list, you can set objectives for their recovery, and design strategies and services that can be used to restore access to data for the applications and end users who need them. This is probably easier said than done, because it also has to be achieved within time frames that mean your organization stays in business.
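One simple way to capture that priority list is as a small data structure pairing each business process with its recovery objectives. Everything below – the process names, the RTO/RPO figures, the dependencies – is made up for illustration, not taken from any real plan:

```python
# Hypothetical sketch of a BCP priority list. RTO = recovery time
# objective (how long you can be down); RPO = recovery point objective
# (how much data loss you can tolerate). All values are invented.
from dataclasses import dataclass, field

@dataclass
class BusinessProcess:
    name: str
    criticality: int          # 1 = most critical
    rto_hours: float          # maximum tolerable downtime
    rpo_hours: float          # maximum tolerable data loss
    depends_on: list = field(default_factory=list)  # data/technology needed

processes = [
    BusinessProcess("payments", criticality=1, rto_hours=0.5, rpo_hours=0.0,
                    depends_on=["CICS", "DB2"]),
    BusinessProcess("payroll", criticality=3, rto_hours=48, rpo_hours=24,
                    depends_on=["batch", "VSAM"]),
    BusinessProcess("online banking", criticality=2, rto_hours=2, rpo_hours=0.25,
                    depends_on=["CICS", "IMS"]),
]

# The recovery order falls straight out of the criticality ranking.
recovery_order = sorted(processes, key=lambda p: p.criticality)
```

Even a toy list like this makes the later decisions easier: a process with an RTO measured in minutes points towards a hot standby site, while one with an RTO of days may be fine with a cold one.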
You need to be able to run the applications on (probably new) working processors, you need to get them talking to the latest version of the data, and you need to get your users connected to the applications. The options for how to do this range from cheap to hugely expensive. Like all insurance, you don’t want to have to make use of it, but when you do, you want it to cover everything. So what are your choices? You can do nothing – definitely the cheapest, until things go wrong, and then it’s probably the end of the company staying in business. You can use a service bureau or another site. This again is a bit like hoping nothing will go wrong, but if it does, you have some way of staying in business until you can get your own hardware up and running. You need to ensure the other site is not in the same building or even city – earthquakes and other natural disasters do happen. You could have a cold standby site. This is a dearer option, but once everything is powered up, you’re pretty much back in business. The dearest option is the hot standby site, where you basically copy everything to it as it happens on the main site. This hot site can continue running the business for you at a moment’s notice. If you’re a bank or similar, this is what you need. Your users just experience a small hiccough and they continue working. They connect to the new site without realizing anything has changed.
And that is your first big decision over with. The next step is to look at individual systems (such as IMS and CICS) and see how each of those can fail over to the back-up site. Look into how you can ensure data is correct, and how in-flight tasks can have their data backed out and the whole task restarted. How quickly can communications be switched across to the back-up site? And what are the chances of both sites being hit by the same disaster?
And then you need to practise the BCP and see what you forgot in your plan. Which pieces of kit do you use that aren’t standard and can’t be replicated? There are so many things that can go wrong at each site because the set-up can be so different (while being superficially so similar). Who has access to the BCP? Who needs access to the BCP? What happens if a key person is doing a charity sleepover in some rundown part of town and hasn’t got a phone with them? What happens if your company is being attacked by terrorists, hacktivists, or disgruntled ex-employees?
There’s lots to consider. But the first step is to re-visit your Business Continuity Plan – and do it soon.
Sunday, 27 July 2014
Gamification of health
We all know playing games can be fun, and we all know staying healthy is important. (I write this, ironically, as I bite into a doughnut!) Doesn’t it make absolute sense to use the most effective parts of gamification to encourage people to live healthy lives – to eat more healthily, to take exercise, to keep their weight at a healthy level, to reduce anxiety, to ‘play’ their way out of depressive thoughts, to overcome a phobia, to beat those OCD habits, and so on? But is that a reality?
“Gaming to Engage the Healthcare Consumer”, published by ICF International, defines gamification as “the application of game elements and digital game design techniques to everyday problems such as business dilemmas and social challenges”. It cites the Gartner report, “Gamification: Engagement Strategies for Business and IT”, which predicts that by 2015, 50 percent of organizations will be using gamification of some kind, and that in 2016 businesses will spend $2.6 billion on it.
The ICF report suggests that the trend towards value-based care, the increasing role of the patient as consumer, and the millennial generation as desirable health insurance customers are driving healthcare organizations to look at gamification. And this is all made possible by the huge number of smartphones and tablets that potential gamers own.
However, the Gartner report’s headline figure was that 80 percent of current gamified applications will fail to meet business objectives primarily due to poor design. They go on to say that: “While game mechanics such as points and badges are the hallmarks of gamification, the real challenge is to design player-centric applications that focus on the motivations and rewards that truly engage players more fully. Game mechanics like points, badges, and leader boards are simply the tools that implement the underlying engagement models.” Keeping players engaged, what they call “stickiness” in the trade, is a big challenge for any company gamifying health.
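Gartner’s distinction between the mechanics and real engagement is easy to see if you sketch the mechanics themselves. The toy below implements points, badges, and a leaderboard in a few lines (the thresholds and badge names are invented); notice how none of it says anything about a player’s motivation to come back, which is exactly the report’s point:

```python
# Toy sketch of bare game mechanics: points, badges, leaderboard.
# Thresholds and badge names are made up for illustration.

BADGES = {10: "Getting Started", 50: "Regular", 200: "Committed"}

class Player:
    def __init__(self, name):
        self.name = name
        self.points = 0
        self.badges = []

    def log_activity(self, points):
        # Award points, then any badges whose threshold is now reached
        self.points += points
        for threshold, badge in sorted(BADGES.items()):
            if self.points >= threshold and badge not in self.badges:
                self.badges.append(badge)

def leaderboard(players):
    # Rank players by points, highest first
    return sorted(players, key=lambda p: p.points, reverse=True)
```

The hard 20 percent that succeeds, on Gartner’s reading, is everything this sketch leaves out: understanding what actually motivates the player.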
So, what healthy games are available? According to “From Fitbit to Fitocracy: The Rise of Health Care Gamification” at https://knowledge.wharton.upenn.edu/article/from-fitbit-to-fitocracy-the-rise-of-health-care-gamification/, UnitedHealth Group has OptumizeMe, an app that lets users engage in fitness-related contests with their friends. They’re also testing Join For Me, an app encouraging obese teenagers at risk of developing diabetes to play video games that require dancing or other physical activities. MeYou Health has a rewards program for people who complete one health-related task per day.
GymPact uses GPS to track its users to the gym. Members meeting their workout goals win cash, which comes from people paying penalties for failing to exercise as promised. Fitbit has wireless tracking devices that sync to smartphones and computers, allowing users to track their fitness activities. Fitocracy is a social network, where people track their workouts, challenge friends to exercise contests, and earn recognition for meeting goals. SuperBetter Labs is beta testing an online social game designed to help people coping with illnesses, injuries, or depression.
Tom Chivers’ blog at http://blogs.telegraph.co.uk/news/tomchiversscience/100274676/the-apps-that-will-save-your-life-or-not lists Runkeeper, Nike Run, and Fitocracy as apps that reward you for taking exercise, with extra points for the number of steps taken. There’s DietBet and Skinnyo for weight loss and calorie counting. Sleep Cycle encourages you to get more and better sleep. He suggests that there are apps to make a game of physiotherapy, apps for people with autism, for people with dyslexia, even for pain management for burns. The NHS in the UK has a BMI calculator app.
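The calculation behind a BMI app is simple enough to sketch. The formula itself is standard – weight in kilograms divided by the square of height in metres – and the category bands below follow the commonly quoted adult ranges (this is an illustration, not the NHS app’s actual code):

```python
# BMI = weight (kg) / height (m) squared.
# Category bands are the commonly quoted adult ranges.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthy"
    if value < 30:
        return "overweight"
    return "obese"
```

For example, someone weighing 70 kg at 1.75 m tall has a BMI of about 22.9, which falls in the healthy range.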
And that’s pretty much where we are now. Everyone thinks gamification is a great idea for making mundane activities more fun. But, and this is a big ‘but’, just saying something is gamified doesn’t mean that people will come back and use it again and again. We’ve all got apps on our phones and tablets that seemed like a good idea to download at the time, and they haven’t been used much since. A good games app has to engage people. In addition, people have to get some value out of it, such as better health. It would be nice to think that after using the app people have learned something or modified their behaviour in a positive way. Finding programmers who can make this happen is also a challenge.
If only Angry Birds helped you lose weight, cut down on your alcohol consumption, and take more exercise! But if someone finds a way to achieve that, they are on to a winner that we’ll all benefit from.
“Gaming to Engage the Healthcare Consumer”, published by ICF International defines gamification as “the application of game elements and digital game design techniques to everyday problems such as business dilemmas and social challenges”. They cite the Gartner report, “Gamification: Engagement Strategies for Business and IT”, that by 2015, 50 percent of organizations will be using gamification of some kind, and, in 2016, businesses will spend $2.6 billion on gamification.
The ICF report suggests that the trend towards value-based care, the increasing role of the patient as consumer, and the millennial generation as desirable health insurance customers are driving healthcare organizations to look at gamification. And this is all made possible by the huge number of smartphones and tablets that potential gamers own.
However, the Gartner report’s headline figure was that 80 percent of current gamified applications will fail to meet business objectives primarily due to poor design. They go on to say that: “While game mechanics such as points and badges are the hallmarks of gamification, the real challenge is to design player-centric applications that focus on the motivations and rewards that truly engage players more fully. Game mechanics like points, badges, and leader boards are simply the tools that implement the underlying engagement models.” Keeping players engaged, what they call “stickiness” in the trade, is a big challenge for any company gamifying health.
So, what healthy games are available? According to “From Fitbit to Fitocracy: The Rise of Health Care Gamification” at https://knowledge.wharton.upenn.edu/article/from-fitbit-to-fitocracy-the-rise-of-health-care-gamification/, UnitedHealth Group has OptumizeMe, an app that lets users engage in fitness-related contests with their friends. They’re also testing Join For Me, an app encouraging obese teenagers at risk of developing diabetes to play video games that require dancing or other physical activities. MeYou Health has a rewards program for people who complete one health-related task per day.
GymPact uses GPS to verify that its users actually go to the gym. Members meeting their workout goals win cash, which comes from people paying penalties for failing to exercise as promised. Fitbit has wireless tracking devices that sync to smartphones and computers, allowing users to track their fitness activities. Fitocracy is a social network where people track their workouts, challenge friends to exercise contests, and earn recognition for meeting goals. SuperBetter Labs is beta testing an online social game designed to help people coping with illnesses, injuries, or depression.
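The cash-for-workouts mechanism described above amounts to a simple penalty pool: members who miss their goal pay in, and the pot is split among those who met it. Here is a minimal sketch of that idea — the names, amounts, and even-split rule are illustrative assumptions, not GymPact's actual terms:

```python
# Hypothetical sketch of a GymPact-style payout pool: members who miss
# their weekly goal pay a fixed penalty, and the pot is split evenly
# among the members who met their goal.

def weekly_payouts(members, penalty=5.0):
    """members: dict mapping name -> True if goal met, False if missed."""
    winners = [name for name, met in members.items() if met]
    losers = [name for name, met in members.items() if not met]
    pot = penalty * len(losers)
    share = pot / len(winners) if winners else 0.0
    # Winners gain their share of the pot; losers pay the penalty.
    return {name: (share if met else -penalty) for name, met in members.items()}

week = {"alice": True, "bob": False, "carol": True, "dave": False}
print(weekly_payouts(week))  # alice and carol each gain 5.0; bob and dave each pay 5.0
```

One appealing property of this design is that it is self-funding: the payouts always sum to zero, so the rewards come entirely from the penalties.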
Tom Chivers’ blog at http://blogs.telegraph.co.uk/news/tomchiversscience/100274676/the-apps-that-will-save-your-life-or-not lists Runkeeper, Nike Run, and Fitocracy as apps that reward you for taking exercise, with extra points for the number of steps taken. There’s DietBet and Skinnyo for weight-loss and calorie counting. Sleep Cycle encourages you to get more and better sleep. He suggests that there are apps to make a game of physiotherapy, apps for people with autism, for people with dyslexia, even for pain management for burns. The NHS in the UK has a BMI calculator app.
And that’s pretty much where we are now. Everyone thinks gamification is a great idea to make mundane activities more fun. But, and this is a big ‘but’, just saying something is gamified doesn’t mean that people will come back and use it again and again. We’ve all got apps on our phones and tablets that seemed like a good idea to download at the time but have barely been used since. A good games app has to engage people. In addition, people have to get some value out of it, such as better health. It would be nice to think that after using the app people have learned something or modified their behaviour in a positive way. Finding programmers who can make this happen is also a challenge.
If only Angry Birds helped you lose weight, cut down on your alcohol consumption, and take more exercise! But if someone finds a way to achieve that, they are on to a winner that we’ll all benefit from.
Labels:
blog,
Eddolls,
gamification,
Gartner,
health,
ICF International,
Tom Chivers
Sunday, 20 July 2014
IBM and Apple deal
What a surprise! IBM and Apple have announced that they are working together. Who’d have thought it? Would it have happened under Steve Jobs’ leadership? Will it work? The two companies are planning to co-develop business-centric apps for the iPhone and iPad. And IBM is going to sell Apple’s mobile devices pre-installed with the new software to its business clients.
People are suggesting that IBM now has special access rights to certain security features on the devices and that other companies don’t have that kind of access. As a consequence, IBM can supply apps and services that are similar in behaviour to what users of Microsoft devices would expect. What hasn’t been made clear is what the financial arrangements are and what apps are going to be produced.
It seems that the deal is one that favours Apple. After all, they have a smaller part of the smartphone and tablet market worldwide than Android. According to IDC, Android will have about 60 percent of the smartphone market and Apple less than 20 percent. And Gartner are suggesting that Android has over 60 percent of the tablet market, with Apple’s share shrinking year-on-year at about 30 percent. And, after all the things Apple have said over the years, it seems an unlikely combination.
Maybe mainframe users will choose to use an Apple tablet and boost the flagging Apple sales that way. It seems hardly likely that a tablet user will rush out and buy a mainframe! Hence my conclusion that the relationship is very asymmetrical and favours Apple far more than IBM. Or, thinking the unthinkable (again), is Big Blue looking to take over Apple at some stage in the future – feeling that it can provide customers with an alternative to Microsoft and Android?
Or, perhaps, IBM looked in the mirror and saw itself 50 years ago, being able to dictate what software ran on its hardware and generally disregarding what every other company was doing as it stood in powerful isolation. And we know how that turns out.
I could make a prediction here that in three years’ time Apple will be a division of IBM. I could make a prediction, but predictions are notoriously unreliable. For example, Steve Ballmer, writing in USA Today, 30 April 2007, said: “There’s no chance that the iPhone is going to get any significant market share”. Or Thomas Watson, chairman of IBM, who in 1943 said: “I think there is a world market for maybe five computers”.
There are more of these unfortunate predictions. Ken Olson, president, chairman, and founder of Digital Equipment Corp in 1977 said: “There is no reason anyone would want a computer in their home”. Or Bill Gates, who in 1981 is meant to have said: “640K ought to be enough for anybody” – although that one probably isn’t true.
Robert Metcalfe, the inventor of Ethernet, writing in InfoWorld magazine in December 1995 said: “I predict the Internet will soon go spectacularly supernova and in 1996 catastrophically collapse”. An engineer at the Advanced Computing Systems Division of IBM, in 1968, said about the microchip: “But what...is it good for?” Or the editor in charge of business books for Prentice Hall said in 1957: “I have travelled the length and breadth of this country and talked with the best people, and I can assure you that data processing is a fad that won’t last out the year”.
These are predictions that are right up there with H M Warner of Warner Brothers in 1927 saying: “Who the hell wants to hear actors talk?” Or Decca Recording Co rejecting the Beatles in 1962 by saying: “We don’t like their sound, and guitar music is on the way out”. And, of course, journalist Stewart Alsop Jr back in 1991, predicting that the last mainframe would be unplugged by 15 March 1996.
Best not to make predictions, or at least not to publish them, don’t you think? But are we looking at Apple’s last days as an independent company?
Labels:
Apple,
Bill Gates,
blog,
Eddolls,
H M Warner,
IBM,
Ken Olson,
Robert Metcalfe,
Steve Ballmer,
Stewart Alsop Jr,
Thomas Watson
Sunday, 13 July 2014
400 blogs
This is my 400th blog on this site, and I thought it was enough of a milestone to deserve some sort of recognition. And I thought it would be an opportunity to look back on all the things that have happened since that very first blog back in June 2006. In truth, I have published some guest blogs – so not all 400 have been written by me. But, I’ve also written blogs that have been published under other people’s names on a variety of sites, and I’ve had nearly 40 blogs published on the Destination z Web site.
Back in 2006, I was doing a lot of work for Xephon – I was producing and editing those much-loved Update journals. You probably remember MVS (later z/OS) Update, CICS Update – the very first one – DB2 Update, SNA (later TCP/SNA) Update, RACF Update, and WebSphereMQ Update. My very first blog was on the Mainframe Weekly blog site and was called “What’s going on with CICS?”. The first paragraph read:
What do I mean, what’s going on with CICS? Well, CICS used to be the dynamic heart of so many companies – it was the subsystem that allowed the company to make money – and as such there were lots of third parties selling add-ons to CICS to make it work better for individual organizations.
And over the months that followed, I talked about AJAX, Web 2.0, Project ECLipz, Aglets (DB2 agent applets), social networking, back-ups and archives, new versions of CICS, DB2, and IMS, and significant birthdays for software. I blogged about mash-ups using IMS, I gave a number of CSS tips, I wrote about BPEL, I even discussed PST files and the arrival of the Chrome browser. And back in November 2008 I first looked at cloud computing.
In 2009 I talked about CICS Explorer, Twitter, cloud computing, specialty processors, zPrime, mainframe apprentices, that year’s GSE conference, IBM’s London Analytics Solution Centre, more anniversaries and software updates, and much more.
2010 saw more blogs about the recession, IBM versus Oracle, social media, Linux, clouds, performance, the zEnterprise, some thoughts about SharePoint, Android, and connecting to your mainframe from a phone, SyslogD, the GSE conference, and lots of other thoughts on the events of the year.
2011 had a lot of blogs about cloud computing and virtual user groups, as well as more about SharePoint. The SharePoint blogs were also published on the Medium Sized Business Blog part of TechNet Blogs (http://blogs.technet.com/b/mediumbusiness/). I also had a serious look at tablets. And wrote the “What’s a mainframe, Daddy?” blog. I had a look at IMS costs, mainframe maintenance, and Web 3.0 and Facebook (with the use of OpenGraph). I also examined gamification and augmented reality and what they meant for the future of software.
In 2012 I mentioned IBM Docs, how to create an e-book, BYOD (Bring Your Own Device), operating systems on a memory stick, cloud wars, and using the Dojo Toolkit to make the end user experience of CICS nicer, and more friendly (of course). There was talk of RIM, Hadoop, IOD, and Lotus.
2013 saw quite a few blogs about big data. My Lync and Yammer blog was republished on the IT Central Web site. And I looked at social media, bitcoins, and push technology, as well as IBM’s new mainframe and much else.
So far in 2014, we’ve covered more about big data and enterprise social networks, we’ve looked at NoSQL, Software Defined everything, and our old friends REXX and VM/CMS, and a lot more besides.
Over the years there have been frequent blogs about the Arcati Mainframe Yearbook, and in particular its user survey results.
Are the blogs any good? Well, over the years they have gained various awards and quite a few have been republished on a number of different Web sites, where they’ve been getting positive reviews and plenty of hits.
You can read my blogs at mainframeupdate.blogspot.com, and it.toolbox.com/blogs/mainframe-world/. You can follow on Twitter at twitter.com/t_eddolls, or on Facebook at fb.com/iTechEd – and we appreciate you ‘LIKEing’ the page.
What about the future? The blogs will continue and, as usual, I’ll mainly focus on what’s happening with the mainframe industry, but I think it’s important to take a wider view and keep abreast of new IT technologies and ideas as they happen and try to put them in context and give my evaluation of them.
If you have read all 400 – thank you. If this is the first one you’ve read, then hopefully you’ll be back again next week for more!
Trevor Eddolls
IBM Champion
Sunday, 6 July 2014
Inside Big Data
Everyone is talking about big data, but sometimes the things you hear people say aren’t always strictly accurate. Adaptive Computing’s Al Nugent, who co-wrote “Big Data for Dummies” (Wiley, 2013) has written a blog called “Big Data: Facts and Myths” at http://www.adaptivecomputing.com/blog-hpc/big-data-facts-myths/ – I thought it would be interesting to hear what he has to say.
He says: “there has been an explosion in the interest around big data (and big analytics and big workflow). While the interest, and concomitant marketing, has been exploding, big data implementations have proceeded at a relatively normal pace.” He goes on to say: “One fact substantiated by the current adoption rate is big data is not a single technology but a combination of old and new technologies and that the overarching purpose is to provide actionable insights. In practice, big data is the ability to manage huge volumes of disparate data, at the right speed and within the right time frame to allow real-time analysis and reaction. The original characterization of big data was built on the 3 Vs:
- Volume: the sheer amount of data
- Velocity: how fast data needs to be ingested or processed
- Variety: how diverse is the data? Is it structured, unstructured, machine data, etc.
“Another fact is the limitation of this list. Over the course of the past year or so others have chosen to expand the list of Vs. The two most common add-ons are Value and Visualization. Value, sometimes called Veracity, is a measure of how appropriate the data is in the analytical context and is it delivering on expectations. How accurate is that data in predicting business value? Do the results of a big data analysis actually make sense? Visualization is the ability to easily ‘see’ the value. One needs to be able to quickly represent and interpret the data and this often requires sophisticated dashboards or other visual representations.
“A third fact is big data, analytics and workflow is really hard. Since big data incorporates all data, including structured data and unstructured data from e-mail, social media, text streams, sensors, and more, basic practices around data management and governance need to adapt. Sometimes, these changes are more difficult than the technology changes.
“One of the most popular myths is the ‘newness’ of big data. For many in the technology community, big data is just a new name for what they have been doing for years. Certainly some of the fundamentals are different, but the requirement to make sense of large amounts of information and present it in a manner easily consumable by non-technology people has been with us since the beginning of the computer era.
“Another myth is a derivative of the newness myth: you need to dismiss the ‘old database’ people and hire a whole new group of people to derive value from the adoption of big data. Even on the surface this is foolhardy. Unless one has a green field technology/business environment, the approach to staffing will be hybridized. The percentage of new to existing will vary based on the size of the business, customer base, transaction levels, etc.
“Yet another myth concerns the implementation rate of big data projects. There are some who advocate dropping in a Hadoop cluster and going for it. ‘We have to move fast! Our competition is outpacing us!’ While intrepid, this is doomed to failure for reasons too numerous for this writing. Like any other IT initiative, the creation of big data solutions need to be planned, prototyped, designed, tested, and deployed with care.”
I thought Al’s comments were very interesting and worth sharing. You can find out more at Adaptive Computing’s Web site.
Sunday, 29 June 2014
Thinking the unthinkable – alternatives to Microsoft
Office 365 with its cloud-based solution to all your office needs seems like a mature and all-encompassing way of moving IT forward at many organizations. But what if your organization isn’t big enough to justify the price tag of the Enterprise version of Office 365 – what if you’re a school, for example? What other choices do you have? Well, let’s take a look at some Open Source alternatives.
The obvious first place to look is Google Apps for Business. It’s not free: it costs $5 per user per month, or $50 per user per year. Google’s definition of a user is a distinct Gmail inbox. Everyone gets 30GB of Drive space, as well as Docs, Sheets, and Slides. Documents created in Drive can be shared individually or across the organization. Google Sites lets you create Web sites. Google Apps Vault is used to keep track of information created, stored, sent, and received through an organization’s Google Apps programs. You can access Apps for Business from mobile devices using Google’s mobile apps. One of the best of these is arguably the QuickOffice app for Android and iOS, which allows users to edit Microsoft Office files stored in Drive. If you are a school, Google Apps for Education is completely free and has the new ‘Classroom’ product coming soon.
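Given the two billing options quoted above, the annual plan works out $10 cheaper per user per year. A quick sketch of that arithmetic (rates as quoted; the function name is just for illustration):

```python
# Compare the two Google Apps for Business billing options quoted above:
# $5 per user per month versus $50 per user per year.

def yearly_cost(users, monthly_rate=5.0, annual_rate=50.0):
    monthly_plan = users * monthly_rate * 12   # paying month by month for a year
    annual_plan = users * annual_rate          # paying once per year
    return monthly_plan, annual_plan

monthly, annual = yearly_cost(100)
print(monthly, annual, monthly - annual)  # 6000.0 5000.0 1000.0
```

So for a 100-user organization, committing annually saves $1,000 a year over paying monthly — small compared with the gap to an Enterprise Office 365 licence, but worth knowing.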
There’s also Zoho, which provides the standard office tools as well as a Campaigns tool, which lets you create e-mail and social campaigns for forthcoming events. Then there’s Free Office, which needs Java. And there’s OX, which offers files, e-mail, address book, calendar, tasks, plus a social media portal.
If you’re looking for an alternative to Outlook, there’s obviously Gmail, Yahoo, and Outlook (Hotmail). For Open Source alternatives, there’s Thunderbird, which comes with numerous add-ons, eg Lightning, its calendar software. There’s also Zimbra Desktop. It can access e-mail and data from VMware Zimbra, cloud e-mail like Gmail, and social networks like Facebook. And there’s Incredimail, but it doesn’t have a calendar. And finally there’s the Opera Email Client.
If you want an alternative to SharePoint then, probably, your first choice is Google Cloud Connect. This simple plug-in connects you to Google Docs and lets you collaborate with other people. Edited documents are automatically sync’ed and sent to other team members. Or you might look at Alfresco. This free platform allows users to collaborate on documents and interact with others. There’s also Samepage, which comes with a paid-for option. Or you could try Liferay Social Office Community Edition (CE). This is a downloadable desktop application. And there’s Nuxeo Open-Source CMS.
If you’re looking for an intranet, then there’s MindTouch Core, which seems to get good reviews. Alternatives include PBWiki, a hosted wiki, which isn’t free for businesses. There’s GlassCubes, an online collaborative workspace, which, again, has a cost implication. There’s Plone, a content management system built on Zope. There’s also Brushtail, HyperGate, Open Atrium, and Twiki.
The bottom line is that if you have power users, who are using lots of the features of Word, Excel, and PowerPoint, then you need those products. If your users are only scratching the surface of what you can do with Microsoft’s Office suite, then it makes sense to choose a much cheaper alternative. If you can use alternatives to Office, then you can probably start to think about using alternatives to other Microsoft products. Perhaps you can live without Outlook for e-mail and calendar. Maybe you’ve never really made the most of SharePoint and you could use an Open Source alternative for file sharing and running your intranet.
The issue is this: you can save huge amounts of money by using Open Source products rather than Office 365, but you will need to spend time learning how to use each ‘best of breed’ alternative and how to integrate it with the other products. That will take up someone’s time. Once you’ve weighed up the pros and cons, you can make a decision about whether to keep the faith and stay with Microsoft – and have a great CV for another job using Microsoft products – or whether to save money and spend lots of time as you take your first steps into the wilderness. But what you’ll find is that the wilderness is quite full of people who’ve also stepped away from the easy choice.
What will you do?
The obvious first place to look is Google Apps for Business. It’s not free, it costs $5 per user per month, or $50 per user per year. Google’s definition of a user is a distinct Gmail inbox. Everyone gets 30GB of Drive space, as well as Docs, Sheets, and Slides. Documents created in Drive can be shared both individually or by organization. Google Sites lets you create Web sites. Google Apps Vault is used to keep track of information created, stored, sent, and received through an organization’s Google Apps programs. You can access Apps for Business from mobile devices using Google’s mobile apps. One of the best apps for this is arguably the QuickOffice app for Android and iOS, which allows users. QuickOffice can edit Microsoft Office files stored in Drive. If you are a school, Google Apps for Education is completely free and has the new ‘Classroom’ product coming soon.
There’s also Zoho, which provides the standard office tools as well as a Campaigns tool, which lets you create e-mail and social campaigns for forthcoming events. Then there’s Free Office, which needs Java. And there’s OX, which offers files, e-mail, address book, calendar, tasks, plus a social media portal.
If you’re looking for an alternative to Outlook, there’s obviously Gmail, Yahoo, and Outlook (Hotmail). For Open Source alternatives, there’s Thunderbird, which comes with the numerous Add-ons, eg Lightening, its calendar software. There’s also Zimbra Desktop. It can access e-mail and data from VMware Zimbra, cloud e-mail like Gmail, and social networks like Facebook. And there’s Incredimail, but it doesn’t have a calendar. And finally there’s the Opera Email Client.
If you want an alternative to SharePoint then, probably, your first choice is Google Cloud Connect. This simple plug-in connects you to Google Docs and let you collaborate with other people. Edited documents are automatically sync’ed and sent to other team members. Or you might look at Alfresco. This free platform allows users to collaborate on documents and interact with others. There’s also Samepage, which comes with a paid for option. Or you could try Liferay Social Office Community Edition (CE). This is a downloadable desktop application. And there’s Nuxeo Open-Source CMS.
If you’re looking for an intranet, then there’s Mindtouch core, which seems to get good reviews. Alternatives include PBWiki, which includes a wiki, and is hosted by them and isn’t free for businesses. There’s GlassCubes, an online collaborative workspace, which, again has a cost implication. There’s Plone, a content management system built on Zope. There’s also Brushtail, HyperGate, Open Atrium, and Twiki.
The bottom line is that if you have power users, who are using lots of the features of Word, Excel, and PowerPoint, then you need those products. If your users are only scratching the surface of what you can do with Microsoft’s Office suite, then it makes sense to choose a much cheaper alternative. If you can use alternatives to Office, then you can probably start to think about using alternatives to other Microsoft products. Perhaps you can live without Outlook for e-mail and calendar. Maybe you’ve never really made the most of SharePoint and you could use an Open Source alternative for file sharing and running your intranet.
The issue is this: you can save huge amounts of money by using Open Source products rather than Office 365, but you will need to spend time learning how to use each ‘best of breed’ alternative and how to integrate it with the other products. That will take up someone’s time. Once you’ve weighed up the pros and cons, you can make a decision about whether to keep the faith, stay with Microsoft, and have a great CV for another job using Microsoft products, or whether to save money and spend lots of time as you take your first steps into the wilderness. But what you’ll find is that the wilderness is quite full of people who’ve also stepped away from the easy choice.
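To make that trade-off a little more concrete, here’s a back-of-the-envelope sketch. Only the $50-per-user-per-year Google Apps figure comes from the pricing above; the current-suite cost and the one-off migration-effort figure are invented placeholders, not real quotes:

```python
# Back-of-the-envelope comparison: subscription savings vs migration effort.
# Only the $50/user/year Google Apps price comes from the text above;
# every other figure is an assumed placeholder.

def annual_saving(users, current_cost=150, alternative_cost=50):
    """Saving per year from switching suites (costs are per user per year)."""
    return users * (current_cost - alternative_cost)

def payback_years(users, one_off_migration_cost):
    """Years of savings needed to cover the one-off learning/integration cost."""
    return one_off_migration_cost / annual_saving(users)

# 200 users, and an assumed $20,000 of staff time to learn and
# integrate the 'best of breed' alternatives:
print(annual_saving(200))         # 20000
print(payback_years(200, 20000))  # 1.0
```

The point of the sketch is simply that the someone’s-time cost is a one-off, while the subscription saving recurs every year, so even a generous estimate of learning time tends to pay back quickly.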
What will you do?
Labels:
Alfresco,
blog,
Eddolls,
Free Office,
Gmail,
Google Apps,
Liferay,
Microsoft,
Mindtouch,
Office 365,
Outlook,
Thunderbird,
Zimbra Desktop,
Zoho
Sunday, 22 June 2014
Plus ca change
I’ve worked with mainframes for over 30 years and I’m used to seeing trends moving in one direction and then, a few years later, going in the opposite direction. Each initiative gets sold to us as something completely new and the solution that we’ve been waiting for. I imagine you share my experience. I originally worked on green screens with all the processing taking place on the mainframe. In fact, I can remember decks of cards being punched and fed into the mainframe. I can remember the excitement of everyone having their own little computer when PCs first came out. I can remember client/server being the ultimate answer to all our computing issues. Outsourcing, not outsourcing – we could wander down Memory Lane like this for a long time.
What always amazes me is when I’m working with sites that are predominantly Windows-based, and they still get that frisson of excitement over an idea that I think is pretty commonplace. It was only a few years ago (well maybe about five) that the Windows IT teams were all excited about VMware and the ability to virtualize hardware. They couldn’t believe mainframes had been doing that since the 1960s.
Then there was the excitement about using Citrix and giving users simple Linux terminals rather than more expensive PCs. Citrix have a host of products, including GoToMeeting – their conferencing software. With Citrix desktop solutions, all the applications live on the server rather than on each individual computer. It means you can launch a browser on your laptop, smartphone, tablet, or whatever device you like that has a browser, and see a Windows-looking desktop and all the usual applications. So, it’s just like a dumb terminal connecting to a mainframe, which does all the work and looks after all the data storage. Nothing new there!
And now Microsoft are selling Office 365, which, once you’ve paid your money, means that all the applications live in the cloud somewhere, and so does the data. It seems that all subscribers are like remote users, dialling into an organization’s mainframe that could be located in a different country or on a different continent. Looked at another way, IT departments are in many ways outsourcing their responsibilities – and we all remember when outsourcing was on everyone’s mind.
Office 365 seems like a very mature product and one whose time is about to come. You get more than just the familiar Office products like Word and Excel. You get SharePoint, Lync, and Exchange (and I’m talking about the Enterprise version of Office 365). Lync lets users chat to each other – a bit like using MSN used to. And SharePoint provides you with an intranet as well as file and document management capabilities. You get Outlook, Publisher (my least-favourite piece of software), Access (the database), and InfoPath (used for electronic forms). You also get a nicely integrated Yammer – Microsoft’s Enterprise Social Networking (ESN) tool. There’s also PowerBI, a suite of business intelligence and self-serve data mining tools coming soon. This will integrate with Excel, so users can use the Power Query tool to create spreadsheets and graphs using public and private data, and also perform geovisualization with Bing Maps data using the Power Map tool.
And while the actual tools available on these different platforms and computing models differ over time, it’s the computing concepts, I’m suggesting, that come and go and come again, and go again! It’s like a battle between centralization and decentralization. Everyone likes to have that computing power on their phone or tablet, but whenever you need to do some real work, you connect (usually using a browser) to a distant computing monolith. So, plus ça change, plus c’est la même chose.
Labels:
blog,
Citrix,
client/server,
Eddolls,
GoToMeeting,
Linux,
mainframe,
Microsoft,
Office 365,
Outsourcing,
VMware,
Windows
Sunday, 15 June 2014
Having your cake and eating it
Everyone knows that mainframes are the best computers you can have. You can run them locally, you can hide them in the cloud, and you can link them together into a massive processing network. But we also know that there are smaller platforms out there that work differently. Wouldn’t it be brilliant if you could run them all from one place?
Last summer we were excited by the announcement from IBM of its new zBC12 mainframe computer. The zBC12 followed the previous year’s announcement of the zEC12 (Enterprise Class), and 2011 saw the z114, with 2010 giving us the z196. So what’s special about those mainframes?
Well, in addition to IFL, zIIP, and zAAP specialty processors, and massive amounts of processing power, they came with the IBM zEnterprise BladeCenter Extension (zBX), which lets users combine workloads designed for mainframes with those for POWER7 and x86 chips, like Microsoft Windows Server. So let’s unpick this a little.
One issue many companies have after years of mergers and acquisitions is a mixed bag of operating systems and platforms. They could well have server rooms across the world and not really know what was running on those servers.
IBM’s first solution to this problem was Linux on System z. Basically, a company could take hundreds of Linux servers and consolidate them onto a single mainframe. They would save on energy to drive the servers and cool them, and they would get control of their IT back.
IBM’s second solution, as we’ve just described, was to incorporate the hardware for Linux and Windows servers in its mainframe boxes. You’d just plug in the blades that you needed and you had full control over your Windows servers again (plus all the benefits of having a mainframe).
But what if you could actually run Windows on your mainframe? That was the dream of some of the people at Mantissa Corporation. They did a technology demo at SHARE in August 2009. According to Mantissa’s Jim Porell, Network World read their abstract and incorrectly assumed that they were announcing a product – which they weren’t. The code is still in beta. But think about what it could mean: running all your Windows servers on a mainframe. That is quite a concept.
Again, according to Jim, they can now have real operating systems running under their z86VM, although, so far, they are the free versions of Linux. Their next step will be to test it with industrial-strength Linux distros such as Red Hat and SUSE. And then, they will need to get Windows running. And then they’ll have a product.
Speaking frankly, Jim said that currently they have a bug in their Windows ‘BIOS’ processing in the area of plug-and-play hardware support. Their thinking is that it’s a mistake in their interpretation of the hardware commands and, naturally, they’re working to resolve it.
The truth is that it’s still early days for the project, and while running Linux is pretty good, we can already do that on a mainframe (although you might quibble at the price tag for doing so). But once the Mantissa technical people have cracked the problems with Windows, it will be a product well worth taking a look at, I’m sure. But they’re not there yet, and they’re keen to manage expectations appropriately.
Jim goes on to say that once the problems are solved it’s going to be about performance. Performance is measured in several ways: benchmarks of primitives, large-scale benchmarks for capacity planning, end-user experiences, and what users are willing to tolerate. So it seems that they have business objectives around performance where they could be successful if they supported only 10 PCs, more successful with 100 PCs, and even more successful if they could support 1000 PCs.
Jim Porell describes z86VM as really just an enabler for solving a wide range of ‘customer problems’, giving direct access between the traditional mainframe and the PC operating systems that are co-located with it.
I think that anything that gets more operating systems running on mainframe hardware has got to be good. And I’m prepared to wait for however long it takes Mantissa to get Windows supported on a mainframe. I’ll definitely feel then that I’m having my cake and eating it!
Sunday, 8 June 2014
Conversational Monitoring System
I do a lot of work with Web sites and people are often talking about using a CMS to make it easier to upload and place content. For them, CMS stands for Content Management System, but I thought it would be fun to revisit Conversational Monitor System – one of the first virtual machines that people really enjoyed using because of its flexibility and functionality.
Our story starts in the mid-1960s at IBM’s Cambridge Scientific Center, where CMS – then called Cambridge Monitor System – first saw the light of day running under CP-40, a VM-like control program, which then developed into CP-67. It provided a way of giving every user the appearance of working on their own computer system. So, every user had their own terminal (the system console for their system) as if they had their own processor, unit record device (what we’d call a printer, card reader, and card punch), and DASD (disk space). The control program looked after the real resources and allocated them, as required, to the users.
In 1972, when IBM released its 370-architecture machine, a new version of VM called VM/370 became available. Unlike CP/CMS, IBM supported VM/370 and CMS retained its initials, but was now known as Conversational Monitor System – highlighting its interactive nature. This commercial product also included RSCS (Remote Spooling Communication Subsystem) and IPCS (Interactive Problem Control System), which ran under CMS.
VM itself was originally not very popular within IBM, but, through quite an interesting story, survived. VM/370 became VM/SE and then VM/SP. There was also a low-end variant called VM/IS. Then there was VM/SP HPO before we had VM/XA SF, VM/XA SP, then VM/ESA, and now z/VM.
But returning to CMS, it was popular because you could do so much with it. You could develop, debug, and run programs, manage data files, communicate with other systems or users, and much more. When you start CMS (by issuing IPL CMS), it loads a profile exec, which sets up your virtual environment in exactly the way that you want it to be.
Two products that made CMS users very happy were PROFS and REXX. PROFS (PRofessional OFfice System) became available in 1981 and was originally developed by IBM in Dallas, in conjunction with Amoco. It provided e-mail, shared calendars, and shared document storage and management. It was so popular that it was renamed OfficeVision and ported to other platforms. OfficeVision/VM was dropped in 2003, with IBM recommending that users migrate to Lotus Notes and Domino, which it had acquired by taking over Lotus.
REXX (Restructured Extended Executor) is an interpreted programming language developed by Mike Cowlishaw and released by IBM in 1982. It is a structured, high-level programming language that's very easy to get the hang of. In fact, it was so popular that it was ported to most other platforms. REXX programs are often called REXX EXECs because REXX replaced EXEC and EXEC2 as the command language of choice.
CMS users will remember XEDIT, an editor written by Xavier de Lamberterie that was released in 1980. XEDIT supports automatic line numbers, and many of its commands operate on blocks of lines. The command line allows users to type editor commands. It replaced EDIT SP as the standard editor. Again, XEDIT was very popular and was ported to other platforms.
CMS provided a way to maximize the number of people who could concurrently use mainframe facilities at a time when these facilities were fairly restricted. It was a hugely successful environment, spawning tools that themselves were ported to other platforms because they were so successful. I used CMS and VM a lot back in the day, and even wrote two books about VM. Like many users, I have very fond memories of using CMS and of what could be achieved with it.
Sunday, 25 May 2014
Busy week for IBM
IBM has had a busy week, with announcements about cloud, tape storage, and even giving Watson a personality! So let’s have a look at what’s being said.
An IBM survey (produced by the Institute for Business Value) found that global organizations are unprepared for things like cloud computing, analytics, mobile devices, and social media. And guess what? IBM has some new systems, software, and capabilities to help those organizations create smarter infrastructures that will give them faster access to big data insights through the cloud and improved business performance.
IBM recently announced software enabling organizations to access any data from any device and from anywhere in the world, and has added to that with storage announcements. Its Storwize, XIV, tape library, and Flash storage products can optimize storage for large-scale cloud deployments through virtualization, real-time compression, easy-tiering, and mirroring, and provide fast access to information.
IBM’s Storwize V7000 Unified has been enhanced with new clustering capabilities, real-time compression, and Active Cloud Engine to help customers manage growing amounts of data. The system also supports 4 petabytes (twice the storage capacity of previous models).
IBM’s XIV Cloud Storage for Service Providers provides a pay-per-use pricing model for business partners that reduces the initial cost of the system. New features demoed included XIV multi-tenancy, enhanced data security, and improved cloud economics through the partition of XIV storage into logical domains assigned to distinct tenants.
The TS4500 Tape Library enables large-scale cloud deployments with a data architecture that maintains high utilization and can back up three times more cloud data in the same footprint. And the IBM DS8870 Flash enclosure provides up to three-and-a-half times faster flash performance while requiring 50 percent less space and 12 percent less energy.
IBM has launched IBM Cloud Manager with OpenStack, which can be downloaded from its Marketplace like any other application. It’s based on IBM Cloudentry, and includes full access to Icehouse, the latest version of OpenStack. It can also be bought as part of a package with the IBM Power Systems server range to form the IBM Power Systems Solution Edition for Scale Out Cloud.
IBM also has IBM Flex System X6 compute nodes: the x880 X6 eight-socket, the x480 X6 four-socket, and the x280 X6 two-socket models. These nodes use a modular blade design that enables seamless scalability without ‘rip and replace’ as analytic workloads increase.
The IBM System x3100 M5 is a new, compact, tower server equipped with the latest Intel Xeon E3-1200v3 processors for increased performance, and four levels of RAID for enhanced data protection.
The IBM PureFlex Solution for Parallels allows managed service providers to integrate Web, IaaS, SaaS, and core services for their clients on the Parallels Automation software platform.
IBM also announced cloud-based offerings (Software as a Service) for IBM Concert, IBM Project Catalyst, and OpenPages. IBM Concert provides a way to make budgeting, planning, and forecasting software more accessible to sales, marketing, and incentive-compensation decision makers. Project Catalyst provides advanced analytics capabilities to everyone. OpenPages provides governance, risk-management, and compliance applications as a managed service on SoftLayer.
Meanwhile, IBM and Fujifilm have demoed a 154TB LTO-size tape cartridge. The density developments rely on smaller magnetic particles and narrower tracks, meaning more tracks on a half-inch-wide tape, and better read/write heads to follow and read from or write to those tracks.
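The arithmetic behind those density gains is simple: capacity is roughly track count times linear bit density times tape length, so narrower tracks (more of them across the half-inch width) multiply capacity directly. All the figures in this sketch are invented placeholders, not Fujifilm or IBM specifications:

```python
# Toy model: cartridge capacity grows with more (narrower) tracks,
# denser bits along each track, and longer tape.
# Every figure here is an invented placeholder, not a real spec.

def cartridge_capacity_tb(tracks, bits_per_mm, tape_length_m):
    """Rough capacity in terabytes, ignoring formatting overheads."""
    total_bits = tracks * bits_per_mm * tape_length_m * 1000  # metres -> mm
    return total_bits / 8 / 1e12                              # bits -> TB

# Doubling the track count (i.e. halving the track width) doubles capacity:
print(cartridge_capacity_tb(10000, 15000, 1000))  # 18.75
print(cartridge_capacity_tb(20000, 15000, 1000))  # 37.5
```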
And finally, in a busy week of announcements, IBM’s Watson group has acquired Cognea, an Australian-founded start-up that makes virtual assistants for enterprise customers. They say that the Watson Platform will fit into the “cognitive era of computing” where people have conversations with machines that are able to understand natural language. IBM has brought in Cognea to offer a range of personalities for Watson, “from suit-and-tie formal to kid-next-door friendly”. The acquisition furthers IBM’s plan to make Watson a development platform available to the enterprise, start-ups, and universities.
Sunday, 18 May 2014
Enterprise Social Networking
Enterprise Social Networks are identifiable by the fact that they integrate with existing platforms and applications and they appeal directly to end users. In effect, they bring the benefits of social media to the enterprise. So how would you recognize an Enterprise Social Networking (ESN) product? It would be something like Microsoft’s Yammer, Jive Software’s Jive, Salesforce’s Chatter, and IBM’s Connections. In fact, those are Gartner’s four leading products in the sector (Gartner Inc “Magic Quadrant for Social Software in the Workplace” by Nikos Drakos et al, September 2013).
Drakos and colleagues estimated the market will be worth $1.4 billion in revenue by 2016, and described it as “dynamic and highly competitive”. Gartner looked at the top 20 vendors in the sector. Apart from the top four mentioned above, Gartner’s Visionaries were: Google, Telligent (Zimbra), SAP, Cisco, and Acquia. The Niche players were: OpenText, Huddle, blueKiwi, Igloo, Novell, Liferay, and Zyncro. And the Challengers were: Tibco Software, VMware, NewsGator, and Atlassian.
According to Gartner, “Leaders are well-established vendors with widely used social software and collaboration offerings. They have established their leadership through early recognition of users’ needs, continuous innovation, overall market presence, and success in delivering user-friendly and solution-focused suites with broad capabilities”.
If you’re thinking of getting an Enterprise Social Network, what are you going to use it for? “S.O.C.I.A.L. – Emergent Enterprise Social Networking Use Cases: A Multi Case Study Comparison”, by Kai Riemer and Alexander Richter (2012), analysed nearly 7500 messages from across five mature networks and found that virtually all the messages could be grouped into one of eleven generic categories. They were:
- Problem solving – what can I do with x that I can’t do with y?
- Idea generation – how can we make this group more useful to its members?
- Status updates – I’m in my weekly meeting with customers
- Work coordination – @bob Can you raise tom’s permissions?
- Information storage – checklist for H&S
- Discussion and opinions
- Input generation – #NHF recommends
- Meeting organization – I can’t make that time, can we shift to 4pm?
- Event notifications – 19 May for leaving drinks.
- Social praise – thanks for all your hard work on cut-over day.
- Informal talk – congratulations on your new baby.
Gartner has suggested that the business objectives of Enterprise Social Networking projects are to:
- Improve general communication and information sharing
- Boost team productivity and effectiveness with projects and business processes
- Support communities that stimulate learning and innovation, diffuse best practices, and encourage peer-to-peer networking that strengthens professional and interpersonal relationships.
So let’s take a brief look at those market leaders.
Yammer was launched in 2008 and was bought by Microsoft in 2012. The plans seem to be that it will tightly integrate with Office, SharePoint, and application programs running on Windows. Yammer lives in the cloud and looks pretty much like Facebook, so users can find their way round it fairly easily. With Office 365, sites can run their Microsoft infrastructure in the cloud too.
Jive Software’s Jive was previously known as Clearspace, then Jive SBS, then Jive Engage. Jive (the company) was founded in 2001. Salesforce.com was founded in 1999, and provides a variety of different services to its customer base. Both offer a Facebook-like product.
IBM’s product is IBM Connections. It was announced at Lotusphere in 2007, and is currently at Version 4.5. Its components include a homepage, microblogging, profiles, communities, ideation (the ability to crowdsource ideas), media gallery, blogs, bookmarks, activities (a tool for groups of people to work together on a specific project or task), files, wikis, forums, and a search facility.
The ten IBM Connections components are J2EE (Java 2 Platform, Enterprise Edition) applications that are hosted on IBM WebSphere Application Server. In this way, the components can be hosted independently of each other, and large-scale deployments can be supported.
Importantly, if this is going to get any take up outside of IBM-controlled environments, IBM Connections uses plug-ins to integrate into existing applications, including:
- IBM Notes
- IBM Sametime
- Microsoft Office
- Microsoft Outlook
- Microsoft Windows Explorer
- Microsoft Sharepoint
- RIM BlackBerry
- Apple iPhone / iPad / iPod Touch
- Google Android Phones
- WebSphere Portal.
There’s also platform support for IBM WebSphere Application Server V8 and DB2 10, as well as support for the IBM i operating system.
It’s likely that once people start to use them, Enterprise Social Networks will take on a life of their own and new uses will be found for them. My feeling is that their use will continue to grow and you’ll begin to find them embedded in every organization that you visit – and you’ll find people checking them on their smartphones and tablets when they’re out and about.
Labels:
Alexander Richter,
blog,
chatter,
Connections,
Eddolls,
enterprise,
ESN,
Gartner,
IBM,
Jive,
Jive Software,
Kai Riemer,
Microsoft,
Networking,
networks,
Nikos Drakos,
Salesforce,
social,
Yammer
Sunday, 11 May 2014
Trevor Eddolls - IBM Champion 2014
iTech-Ed Ltd, the mainframe specialist organization that provides IT consultancy, analysis, technical education and training, Web design, writing, and editing solutions, is pleased to announce that Trevor Eddolls, its CEO, has been recognized by IBM as an IBM Champion for Information Management for the sixth year running. Trevor was first made an IBM Champion in 2009.
Trevor Eddolls, CEO of iTech-Ed Ltd said: “I am really proud to be recognized for this award again this year. There may not be a financial benefit to being an IBM Champion, but it’s a positive way for IBM to recognize people around the world who are helping to promote IBM’s products and help share information about those products amongst their users”.
But what does it mean? According to IBM: “IBM Champions encompass educators, programmers, developers and other IT professionals across a spectrum of technology categories, including big data, business analytics, information management, storage and more. These individuals serve as advocates and mentors for those availing themselves of IBM solutions and services.”
Contributions can come in a variety of forms, and popular contributions include blogging, speaking at conferences or events, moderating forums, leading user groups, and authoring books or magazines. Educators can also become IBM Champions; for example, academic faculty members may become IBM Champions by including IBM products and technologies in course curricula and encouraging students to build skills and expertise in these areas.
An IBM Champion is not an IBMer, and can live in any country. IBM Champions share their accomplishments and activities in their public profiles on IBM developerWorks, making it easy for the IT professional community to learn more about them and their contributions, and engage with them.
So why is iTech-Ed Ltd’s Trevor Eddolls an IBM Champion? Well, he doesn’t work for IBM, but he does write about mainframe hardware and software. You can read his blog at mainframeupdate.blogspot.com and it.toolbox.com/blogs/mainframe-world. He also blogs once a month on the Destination z Web site. He’s Editorial Director for the well-respected Arcati Mainframe Yearbook. He’s also written technical articles that have been published in a variety of journals including Enterprise Tech Journal and its predecessor, z/Journal. And Trevor Eddolls is the chair of the Virtual IMS user group and the Virtual CICS user group. He also looks after their social networking – you can find information about the groups on Twitter, Facebook, and LinkedIn.
IBM Champions receive the title for one year, during which they can enjoy the benefits associated with the program – rather than any direct payment from IBM. Existing Champions are eligible to renew their status for the following year, as long as they can demonstrate that they have made significant contributions to the community over the previous 12 months.
Are IBM Champions compensated for their role? No. Do IBM Champions have any obligations to IBM? Again, the answer is no. The title simply recognizes their contributions to the community over the previous 12 months. Do IBM Champions have any formal relationship with IBM? No. IBM Champions don’t formally represent IBM, nor do they speak on behalf of IBM.
But it’s not all one-sided! IBM Champions receive merchandise customized with the IBM Champion logo. They also receive visibility, recognition, and networking opportunities at IBM events and conferences; special access to product development teams; and invitations and discounts to events and conferences.
You can see Trevor’s profile here.
Sunday, 4 May 2014
Keeping down costs and getting ahead of the competition
We all know that everyone should be using mainframes for all their computing needs – and we also know that most people have got a laptop for their smaller computing needs on which they use Microsoft Office. A recent survey has published interesting results showing how those users could save money.
The company is called SoftWatch (www.softwatch.com), and their benchmark study looked at options for moving from on-premises applications to cloud-based solutions. They compared the cost of using MS Office with using Google Apps. Their study included over 150,000 users. What they found was that, on average, an employee spends only 48 minutes a day on MS Office applications, most of it on Outlook for e-mail. The study also revealed high numbers of inactive users in the organizations, and found that PowerPoint was not being used at all by half of the employees. In addition, most of the users of the other applications used them primarily for viewing and light editing purposes, with only a small number of heavy users: 2 percent in PowerPoint, 9 percent in Word, and 19 percent in Excel. According to SoftWatch, by moving light users from MS Office to Google Apps, organizations could save up to 90% on their Microsoft licensing fees.
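The arithmetic behind that claim is easy to sketch. The per-seat prices below are hypothetical (the study doesn’t publish licence costs), and the 10% heavy-user share is only a rough stand-in for the study’s 2–19% per-application figures, so this lands short of SoftWatch’s “up to 90%” headline number:

```python
# Back-of-envelope version of the SoftWatch argument, with made-up
# per-seat prices; only the heavy users keep their full MS Office licence.
office_seat = 150.0    # assumed annual MS Office cost per user
google_seat = 50.0     # assumed annual Google Apps cost per user

users = 10_000
heavy_share = 0.10     # rough stand-in for the study's heavy-user share

heavy = int(users * heavy_share)   # keep MS Office
light = users - heavy              # move to Google Apps

before = users * office_seat
after = heavy * office_seat + light * google_seat
saving = 1 - after / before
print(f"saving on licensing spend: {saving:.0%}")
```

Push the heavy-user share down towards the study’s single-digit percentages, or drop the light users to a cheaper tier still, and the saving climbs towards the quoted 90%.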
By using SoftWatch’s software, companies can actually measure to what extent employees are using MS Office applications. The data is displayed on a dashboard, making it easy to determine what an enterprise really needs in terms of Microsoft licences, how to effectively transition to Google Apps, and how much money can be saved.
Also this week, Adaptive Computing (www.adaptivecomputing.com/) was thinking about Big Data, and more specifically about data paring. Their argument is that data is growing exponentially, and that growing your compute and storage to match would require budgets that aren’t realistic. As the amount of data increases exponentially, the amount of interesting data doesn’t, so somehow the ‘noise’ needs to be ignored.
They suggest that paring and sifting algorithms are the way forward, and that they will only grow in significance over time. They add that data capturing will always be fundamentally faster and easier than data analysis, and data will continue to multiply exponentially. Not wasting time on irrelevant data will be one of the keys to staying ahead of the competition.
They conclude that the scientific community has been determining how to remove irrelevant data for a long time, so long that the term outlier is mainstream. As Big Data moves to the forefront, organizations that can adapt techniques to ignore outliers and draw intelligent conclusions based on higher-correlated data are going to lead the way.
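As a concrete, if very simplified, sketch of that kind of paring, the classic interquartile-range (IQR) rule drops outliers before analysis. The 1.5×IQR threshold is the conventional textbook choice, not anything Adaptive Computing specifies:

```python
# A minimal sketch of outlier paring using the interquartile-range rule:
# keep only values inside [Q1 - k*IQR, Q3 + k*IQR].

def iqr_filter(values, k=1.5):
    """Return the values that are not outliers under the k*IQR rule."""
    data = sorted(values)

    def quantile(q):
        # simple linear-interpolation quantile on the sorted data
        pos = q * (len(data) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(data) - 1)
        return data[lo] + (data[hi] - data[lo]) * (pos - lo)

    q1, q3 = quantile(0.25), quantile(0.75)
    spread = q3 - q1
    low, high = q1 - k * spread, q3 + k * spread
    return [v for v in values if low <= v <= high]

readings = [9, 10, 10, 11, 12, 10, 11, 250]   # 250 is sensor noise
print(iqr_filter(readings))                   # the 250 is pared away
```

Real pipelines would use more sophisticated, domain-aware rules, but the principle is the same: spend analysis time only on data that correlates with the rest.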
With the growth in the Internet of Things (IoT) and the ability to store more and more data, I can only agree with them that sorting the wheat from the chaff is going to be what separates successful organizations from the rest.
So there you are. I’ve given you a suggestion of how to cut IT costs, and how to keep your organization ahead of the opposition. Surely this must result in some money coming the way of the mainframe team for their IT needs – as your company looks to the future!
Labels:
Adaptive Computing,
algorithms,
big data,
blog,
data paring,
Eddolls,
Google Apps,
MS Office,
sifting,
SoftWatch
Sunday, 27 April 2014
Tell me about NoSQL
NoSQL seems to be the buzzword of choice at the moment for people who want the flexibility to build and frequently alter their databases. But there are plenty of people who still aren’t quite sure what a NoSQL database is and why they should want to use it. So let’s take a brief overview of NoSQL.
The term NoSQL first saw the light of day in 1998, when Carlo Strozzi used it as the name of his lightweight, open-source relational database because it didn’t expose the standard SQL interface. But the term gained its modern usage in 2009, when it was used as a generic label for non-relational, distributed data stores. So it refers to a whole family of databases, rather than a single type of database.
Developers like NoSQL databases because they can store and retrieve data without being locked into the tabular relationships used in relational databases. They make scaling easier and can provide superior performance. They can store large volumes of structured, semi-structured, and unstructured data. They suit agile sprints, quick iteration, and frequent code pushes. They map naturally onto object-oriented programming, which makes them easy to use and flexible. And they use efficient scale-out architecture instead of expensive monolithic architecture.
But, on the down side, NoSQL databases typically lack ACID (Atomicity, Consistency, Isolation, Durability) transaction support. Atomicity means that each transaction is ‘all or nothing’, i.e. if one part of the transaction fails, the entire transaction fails and the database state is left unchanged. Consistency ensures that any transaction brings the database from one valid state to another. Isolation means that the concurrent execution of transactions results in the same system state that would be obtained if the transactions were executed sequentially. Durability means that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. And that’s the kind of reliability you want in a business-critical database.
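Atomicity is the easiest of the four to see in action. Here’s a minimal sketch using SQLite (a relational database with full ACID support, chosen purely because it ships with Python): a transfer that fails part-way through leaves both balances exactly as they were.

```python
import sqlite3

# In-memory database with two toy accounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds between accounts, all-or-nothing."""
    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.execute(
                "UPDATE accounts SET balance = balance - ? WHERE name = ?",
                (amount, src))
            if amount > 50:
                # Simulate a crash part-way through the transaction
                raise RuntimeError("simulated crash mid-transfer")
            conn.execute(
                "UPDATE accounts SET balance = balance + ? WHERE name = ?",
                (amount, dst))
        return True
    except RuntimeError:
        return False  # rollback already happened; state unchanged

transfer(conn, "alice", "bob", 80)   # fails: both balances untouched
balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)   # still 100
```

Many NoSQL stores instead offer atomicity only per document or per key, so a multi-record update like this one has to be handled in application code.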
NoSQL databases are typically used in Big Data and real-time Web applications. The different NoSQL database technologies were developed because of the increase in the volume of data that people needed to store, the frequency the data is accessed, and increased performance and processing needs.
There are estimated to be over 150 open-source NoSQL databases available. And there are many different types, including some that allow the use of SQL-like languages – these are sometimes referred to as ‘Not only SQL’ databases. Classifying NoSQL databases is quite a challenge, but they can be grouped, by the features they offer, into column, document, key-value, and graph types. Alternatively, they can be classified by data model into KV Cache, KV Store, KV Store – Eventually Consistent, Data-Structures Server, KV Store – Ordered, Tuple Store, Object Database, Document Store, and Wide Columnar Store.
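The simplest of those groupings is the key-value model: opaque values addressed only by key, with no schema and no joins. A toy in-memory sketch shows the whole interface (real stores add persistence, sharding, and replication on top of exactly this shape):

```python
# A toy key-value store: the essence of the KV model is that the store
# knows nothing about the structure of the values it holds.

class KVStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def delete(self, key):
        self._data.pop(key, None)

store = KVStore()
store.put("user:42:name", "Ada")     # structured key names are a convention,
print(store.get("user:42:name"))     # not something the store understands
```

Document stores extend this by making the value inspectable (usually JSON), column stores group keys into families, and graph databases make the relationships between keys first-class.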
The good news for DB2 users is that IBM has provided a new API that supports multiple calls and a NoSQL software solution stack that ships with DB2. It’s free with DB2 on distributed platforms and with DB2 Connect. DB2 also offers a second type of NoSQL-like database – the XML data store. This can store the growing volume of Web-based data.
Rocket Software has a way of using MongoDB (an example of a NoSQL database) on a mainframe: Rocket can provide access to any System z database using any MongoDB client driver. DB2 itself also supports the MongoDB API.
IBM recently announced Zdoop, Hadoop database software for Linux from Veristorm on System z mainframes, stating: “This will help clients to avoid staging and offloading mainframe data to maintain existing security and governance controls”.
Other NoSQL databases that you might want to look out for include Cassandra, CouchBase, Redis, and Riak.
Clearly, with the growth in Big Data, we’ll be hearing a lot more about NoSQL databases and how they can be integrated into mainframe technology. There are lots of them out there and they can be quite different from each other in terms of their features and data models used.
Sunday, 13 April 2014
Name the 12 Apostles - a quiz
Because we’re coming up to Easter, I thought I’d do something different this week!
Can you name the 12 Apostles?
Well, obviously, there’s Matthew, Mark, Luke, and John, who have Gospels named after them – that’s four. Whoops, sorry, you can’t include Mark and Luke – they weren’t Apostles.
Well, from the Easter story we’re all familiar with Peter and Judas Iscariot. So that’s four we can name – we’re a third of the way there.
So who does that leave, mmmh!
There was Simon. Oh no not him because he’s already in our list – he’s also called Peter. So still eight to go.
There was Doubting Thomas – that’s five.
But who were the others? Well, there’s James, who was John’s brother, and there was Andrew, who was Peter’s brother – that’s seven. Just five to go.
So who are those last five? Bonus marks for anyone who remembers Bartholomew, James (the less), Philip, Simon the Canaanite, and Thaddeus.
So hang on, what about all those people who keep saying Jude – the patron saint of desperate causes. Yes he was an Apostle, but, apparently, he is also known as Thaddeus – mainly because early writers didn’t want him being confused with Judas Iscariot (hence the alternative name).
Which brings me nicely back to dealing with Judas Iscariot. Obviously he was an Apostle – one of the 12 – but you can’t imagine early Christians being too keen on including him in a list of revered Apostles! So who became the new twelfth man (if you’ll pardon a sort of cricketing metaphor)? In Acts it is said that following the death of Judas a new twelfth Apostle was appointed. He was called Matthias. He usually appears in lists of the twelve Apostles rather than the disgraced Judas Iscariot. However, some argue that only people actually sent out by Jesus could truly be called Apostles.
Troublingly, The Gospel of John also mentions an Apostle called Nathaniel (see John 1:45-51 and 21:2). Most authorities assume this is an alternative name for Bartholomew – so don’t worry about that one.
St Barnabas (who, confusingly, was originally called Joseph) is referred to as an Apostle in Acts 14:14. His claim to fame was introducing Paul (of road to Damascus fame) to the disciples in Jerusalem. And that’s the lot!
And for lots of bonus marks, can you name the thirteenth Apostle? Of course, there is more than one answer to this question.
Paul (of Tarsus) described himself as an Apostle following his Damascene conversion.
The Roman Emperor Constantine was responsible for making Christianity the official religion in Rome in the 4th century. He is often referred to as the thirteenth Apostle.
Plus, there’s also a long list of people who have brought Christianity to a some particular part of the world, who are referred to as the Apostle of somewhere or Apostle to somewhere else (for example St Augustine, the Apostle to England, or St Patrick, the Apostle to Ireland).
So, a much harder quiz than you might have thought. How many did you know?
If you celebrate it, have a good Easter next week. It’s back to mainframes in two weeks’ time.
Can you name the 12 Apostles?
Well, obviously, there’s Matthew, Mark, Luke, and John, who have Gospels named after them – that’s four. Whoops, sorry, you can’t include Mark and Luke – they weren’t Apostles.
Well, from the Easter story we’re all familiar with Peter and Judas Iscariot. So that’s four we can name, we’re a third of the way there.
So who does that leave? Mmmh!
There was Simon. Oh no, not him – he’s already in our list, because he’s also called Peter. So still eight to go.
There was Doubting Thomas – that’s five.
But who were the others? Well, there’s James, who was John’s brother, and there was Andrew, who was Peter’s brother – that’s seven. Just five to go.
So who are those last five? Bonus marks for anyone who remembers Bartholomew, James (the less), Philip, Simon the Canaanite, and Thaddeus.
So hang on – what about Jude, the patron saint of desperate causes, whom everyone keeps mentioning? Yes, he was an Apostle, but, apparently, he is also known as Thaddeus – mainly because early writers didn’t want him confused with Judas Iscariot (hence the alternative name).
Which brings me nicely back to dealing with Judas Iscariot. Obviously he was an Apostle – one of the 12 – but you can’t imagine early Christians being too keen on including him in a list of revered Apostles! So who became the new twelfth man (if you’ll pardon a sort of cricketing metaphor)? In Acts it is said that following the death of Judas a new twelfth Apostle was appointed. He was called Matthias. He usually appears in lists of the twelve Apostles rather than the disgraced Judas Iscariot. However, some argue that only people actually sent out by Jesus could truly be called Apostles.
Troublingly, the Gospel of John also mentions an Apostle called Nathaniel (see John 1:45-51 and 21:2). Most authorities assume this is an alternative name for Bartholomew – so don’t worry about that one.
St Barnabas (who, confusingly, was originally called Joseph) is referred to as an Apostle in Acts 14:14. His claim to fame was introducing Paul (of road to Damascus fame) to the disciples in Jerusalem. And that’s the lot!
And for lots of bonus marks, can you name the thirteenth Apostle? Of course, there is more than one answer to this question.
Paul (of Tarsus) described himself as an Apostle following his Damascene conversion.
The Roman Emperor Constantine was responsible for legalizing Christianity across the Roman Empire in the 4th century. He is often referred to as the thirteenth Apostle.
Plus, there’s also a long list of people who have brought Christianity to some particular part of the world, who are referred to as the Apostle of somewhere or Apostle to somewhere else (for example St Augustine, the Apostle to England, or St Patrick, the Apostle to Ireland).
So, a much harder quiz than you might have thought. How many did you know?
If you celebrate it, have a good Easter next week. It’s back to mainframes in two weeks’ time.
Sunday, 6 April 2014
Happy birthday mainframe
7 April marks the 50th anniversary of the mainframe. It was on that day in 1964 that the System/360 was announced and the modern mainframe was born. IBM’s Big Iron, as it came to be called, took a big step ahead of its competitors in the BUNCH (Burroughs, UNIVAC, NCR, Control Data Corporation, and Honeywell). The big leap of imagination was to have software that was architecturally compatible across the entire System/360 line.
It was called System/360 to indicate that this new system would handle every need of every user in the business and scientific worlds because it covered all 360 degrees of the compass. That was a triumph for the marketing team because it would have otherwise been called the rather dull System 500. System/360 could emulate IBM’s older 1401 machines, which encouraged customers to upgrade. Famous names among its designers are Gene Amdahl, Bob Evans, Fred Brooks, and Gerrit Blaauw. Gene Amdahl later created a plug-compatible mainframe manufacturing company – Amdahl.
IBM received plenty of orders, and the first mainframe was delivered to Globe Exploration Co. in April 1965. Launching and producing the System/360 cost more than $5 billion, making it the largest privately-financed commercial project up to that time. It was a risky enterprise, but one that worked. From 1965 to 1970, IBM’s revenues went up from $3.6 billion to $7.5 billion, and the number of IBM computer systems installed worldwide tripled from 11,000 to 35,000.
The Model 145 was the first IBM computer to have its main memory made entirely of monolithic circuits. It used silicon memory chips, rather than the older magnetic core technology.
In 1970, the System/370 was introduced. The marketing said that the System/360 was for the 1960s; for the 1970s you needed a System/370. All thoughts of compass points had gone by then. IBM’s revenues went up to $75 billion and employee numbers grew from 120,000 to 269,000, and, at times, customers had a two-year wait to get their hands on a new mainframe.
1979 saw the introduction of the 4341, which was 26 times faster than the System/360 Model 30. The 1980s didn’t have a System/380. But in 1990, the System/390 Model 190 was introduced. This was 353 times faster than the System/360 Model 30.
1985 saw the introduction of the Enterprise System/3090, which had over-one-million-bit memory chips and came with Thermal Conduction Modules to speed chip-to-chip communication times. Some machines had a Vector Facility, which made them faster. It replaced the ES/3080.
The 1990s weren’t a good time for mainframes. For example, in March 1991, Stewart Alsop stated: “I predict that the last mainframe will be unplugged on March 15, 1996.” Not the most successful prediction, but definitely catching the zeitgeist of the time. It was the decade of the System/390 (back to the old style naming convention). We saw the introduction of high-speed fibre optic mainframe channel architecture Enterprise System Connection (ESCON).
The System/360 gave us 24-bit addressing (on a 32-bit architecture) and virtual storage. The System/370 gave us multiprocessor support and then, with extended architecture (370-XA), both 24-bit and 31-bit addressing. With System/390 we got the OS/390 operating system. As we moved into the 2000s, we got zSeries (z/Architecture) and the z operating systems, giving us 24-bit, 31-bit, and 64-bit addressing. In 2003, the z990 was described as “the world’s most sophisticated server”. In 2005 we got the zIIP specialty engine. In 2008 it was the z10 EC with high capacity/performance (quad-core CPU chip). In 2010 the z196 (zEnterprise) had a 96-way core design and distributed systems integration (zBX). In 2012, the zEC12 was described as an integrated platform for cloud computing, with integrated OLTP and data warehousing. In 2000 IBM said it would support Linux on the mainframe, and, by 2009, 70 of IBM’s top 100 mainframe customers were estimated to be running Linux. A zEnterprise mainframe can run 100,000 virtual Linux servers. Modern mainframes run Java and C++. And the latest mainframe is compatible with the earliest System/360, which means that working code written in 1964 will still run on the latest zBC12.
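As a quick back-of-the-envelope check (just a sketch, not anything from IBM’s documentation), the jump in addressable storage at each of those addressing widths can be computed directly:

```python
# Addressable storage for each addressing width used over the
# life of the mainframe: 24-bit (System/360), 31-bit (370-XA
# onwards), and 64-bit (z/Architecture).
def addressable_bytes(bits):
    """Return the size in bytes of an address space of the given width."""
    return 2 ** bits

MIB = 2 ** 20  # mebibyte
GIB = 2 ** 30  # gibibyte

print(addressable_bytes(24) // MIB)  # 16   -> the classic 16MB 'line'
print(addressable_bytes(31) // GIB)  # 2    -> the 2GB 'bar'
print(addressable_bytes(64) // GIB)  # 17179869184 GiB, i.e. 16 exbibytes
```

Each step is not an incremental improvement but a jump of several orders of magnitude, which is partly why code written for the smallest address space still runs happily in the largest.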
In terms of operating systems, OS/360’s MVT variant became OS/VS2 SVS, and then OS/VS2 MVS. That became MVS/SE, which became MVS/SP, which became MVS/XA and then MVS/ESA, before becoming OS/390 and finally z/OS.
And what does the future look like – with people migrating to other platforms and the technically-qualified mainframers beginning to look quite long in the tooth? The answer is rosy! Mainframe applications are moving from their traditional green screens to displays that look like anything you’d find on a Windows or Linux platform. They can provide cloud-in-a-box capability. They can integrate with other platforms. They can derive information from Big Data and the Internet of Things. Their biggest problem is, perhaps, that decision makers at organizations don’t value them enough. Surveys that identify the cost benefits of running huge numbers of IMS or CICS transactions on a mainframe compared to other platforms are generally ignored. Many people think of them as “your dad’s technology”, and high-profile organizations like NASA are unplugging them.
So, although they are often misperceived as dinosaurs, they are in fact more like quick-witted and agile mammals. They provide bang up-to-date technology at a price that is value for money. I predict that we will be celebrating the mainframe’s 60th birthday and its 70th, though we may not be able to imagine how compact it will be by then and what new capabilities it will have.
Happy birthday mainframe.
Sunday, 30 March 2014
Tell me about this Yammer thing
I’ve been to a few companies recently that have been using Yammer as a business tool. If you’ve got offices that are spread out, or if your workforce isn’t usually in the office, then it provides an easy way for people to share things – like comments, documents, or images. And you can form groups, so discussions that are only relevant to a small group of people stay within that small group or team.
Yammer started life in 2008 and was bought by Microsoft in 2012. It’s described as an enterprise social network. That means it’s not a public social network like Facebook, it’s for internal communication between members of an organization or group.
It’s free, it’s very easy to use (if you’ve ever used Facebook), and it provides a private and secure place for discussion. The simplest way to use Yammer is from your browser (Explorer, Firefox, Chrome, etc), and you can download the app for your smartphone or tablet.
It’s easy to set up and use, but I thought I’d put together some instructions for new users, so they know how to get on and start using it.
To sign up, go to www.yammer.com. You’ll see a large box in the middle of the page:
Type in your company e-mail address – you can’t use a personal e-mail address because Yammer networks are tied to your company’s e-mail domain.
Complete your Yammer profile and add a photo. New people in your organization may not be familiar with who you are and your particular skill set.
You can join groups and follow topics that are relevant to you. If Yammer gets very busy with people posting, you won’t want to be informed every time there’s a new post. So, click on the three dots in the upper right-hand corner. In the drop-down menu, select ‘Edit Profile’. Then select ‘Notifications’ from the list on the left, and then choose how often you want to receive notifications. ‘Save’ your choice. There’s a ‘Back Home’ box top-left to get back.
You can also follow other people – that way you get to see what they’re posting.
When you come to use Yammer on subsequent occasions, you simply click on ‘Log In’ on the right of the top menu bar.
Now you can start to use Yammer.
You can post messages – these can be comments, questions, updates. You can post links to articles or blogs elsewhere on the Web.
You can follow people, which means that you want to view messages from them in ‘My Feed’. It’s not like a friend request. They don’t have to agree. They don’t have to follow you back.
You can read what other people are posting and get a feel of what’s going on across the organization.
You can ‘Like’ other people’s posts.
You can find out more about people in your organization by reading their profile.
You could start your own group or join existing groups.
You can upload pictures, organize events and meetings, and survey what people think about things.
You can use topics so that all the posts are around a specific topic. To add a topic to a post, click ‘add topic’ while writing the message or you can use a hashtag. You can also add topics to a published message by clicking ‘more’. Hashtags (#) are used to identify what posts are about and to make finding information easier.
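To see how hashtags make posts findable, here’s a small illustrative sketch – nothing Yammer-specific, just the general idea of pulling topic tags out of a message:

```python
import re

def extract_hashtags(message):
    """Return the hashtag topics (without the '#') found in a message."""
    return re.findall(r"#(\w+)", message)

tags = extract_hashtags("Quarterly figures are up! #finance #Q3results")
print(tags)  # ['finance', 'Q3results']
```

Search tools do essentially this: every post tagged #finance can be grouped together, whoever posted it and wherever it sits in the feed.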
You can search for information in the search box near the top of the page. This will find whether anyone else has posted about a particular topic.
And you can send a direct message in three ways. The first is to use the @ sign followed by the user’s name – as you start to type the name, a drop-down menu will give you suggestions. Alternatively, you can send a private message:
- Click ‘Inbox’ in the left column.
- Click ‘Create Message’ on the right sidebar.
- Select ‘Send Private Message’.
- In the ‘Add Participants’ field, start to type the person’s user name. A drop-down list of matching user names appears.
- Select the name of the person you want to send the message to.
- Write your message, and then ‘Send’.
- Click ‘Online Now’ in the bottom-right corner.
- Start writing the person’s name. A drop-down list of matching user names appears.
- Use the up and down arrows, and ‘Enter’ to select a name. A message box opens.
- Write your message, and then ‘Send’.
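For completeness, Yammer also has a REST API, so posts don’t have to come from the browser at all. The sketch below assumes the v1 messages endpoint and an OAuth access token; the token value is a placeholder, and the parameter names should be checked against Yammer’s current developer documentation:

```python
# Sketch: building a 'post a message' call for Yammer's REST API.
# ACCESS_TOKEN is a hypothetical placeholder - you'd obtain a real
# OAuth token from Yammer's developer pages for your network.
ACCESS_TOKEN = "your-oauth-token-here"

def build_post_request(body, topic=None):
    """Return the URL, headers, and payload for posting a message."""
    url = "https://www.yammer.com/api/v1/messages.json"
    headers = {"Authorization": "Bearer " + ACCESS_TOKEN}
    payload = {"body": body}
    if topic:
        payload["topic1"] = topic  # assumed parameter name for tagging a topic
    return url, headers, payload

url, headers, payload = build_post_request("Hello from the API", topic="testing")
# To actually send it, you would use an HTTP client, for example:
#   import requests
#   requests.post(url, headers=headers, data=payload)
```

That’s mainly of interest if you want to wire Yammer into other systems – for day-to-day use, the browser and mobile apps are all most people will ever need.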
Unbelievably, Yammer refers to all communications inside Yammer as “Yams”. Yams are sorted into various feeds. A feed, if you’re new to social media, is a way of keeping you up-to-date with content that other people are posting.
I think many organizations would benefit from an internal social media tool. There are alternatives to Yammer available, but I think it can be very useful within an organization to help with communication. And it can be fun!