Office 365, with its cloud-based solution to all your office needs, seems like a mature and all-encompassing way of moving IT forward at many organizations. But what if your organization isn’t big enough to justify the price tag of the Enterprise version of Office 365? What if you’re a school, for example? What other choices do you have? Well, let’s take a look at some Open Source and other lower-cost alternatives.
The obvious first place to look is Google Apps for Business. It’s not free: it costs $5 per user per month, or $50 per user per year, and Google’s definition of a user is a distinct Gmail inbox. Everyone gets 30GB of Drive space, as well as Docs, Sheets, and Slides. Documents created in Drive can be shared individually or across the organization. Google Sites lets you create Web sites, and Google Apps Vault keeps track of information created, stored, sent, and received through an organization’s Google Apps programs. You can access Apps for Business from mobile devices using Google’s mobile apps – arguably the best of these is QuickOffice for Android and iOS, which lets users edit Microsoft Office files stored in Drive. If you are a school, Google Apps for Education is completely free and has the new ‘Classroom’ product coming soon.
There’s also Zoho, which provides the standard office tools as well as a Campaigns tool that lets you create e-mail and social campaigns for forthcoming events. Then there’s Free Office, which needs Java. And there’s OX, which offers files, e-mail, address book, calendar, and tasks, plus a social media portal.
If you’re looking for an alternative to Outlook, there’s obviously Gmail, Yahoo, and Outlook.com (formerly Hotmail). For Open Source alternatives, there’s Thunderbird, which comes with numerous add-ons, eg Lightning, its calendar add-on. There’s also Zimbra Desktop, which can access e-mail and data from VMware Zimbra, cloud e-mail services like Gmail, and social networks like Facebook. There’s IncrediMail, though it doesn’t have a calendar. And finally there’s the Opera Mail client.
If you want an alternative to SharePoint then, probably, your first choice is Google Cloud Connect. This simple plug-in connects you to Google Docs and lets you collaborate with other people; edited documents are automatically synced and sent to other team members. Or you might look at Alfresco, a free platform that allows users to collaborate on documents and interact with each other. There’s also Samepage, which comes with a paid-for option. Or you could try Liferay Social Office Community Edition (CE), which you download and run yourself. And there’s Nuxeo’s Open Source CMS.
If you’re looking for an intranet, there’s MindTouch Core, which seems to get good reviews. Alternatives include PBwiki, a hosted wiki that isn’t free for businesses, and Glasscubes, an online collaborative workspace, which, again, has a cost implication. There’s Plone, a content management system built on Zope. There’s also Brushtail, HyperGate, Open Atrium, and TWiki.
The bottom line is that if you have power users who are using lots of the features of Word, Excel, and PowerPoint, then you need those products. If your users are only scratching the surface of what you can do with Microsoft’s Office suite, then it makes sense to choose a much cheaper alternative. And if you can use alternatives to Office, then you can probably start to think about using alternatives to other Microsoft products. Perhaps you can live without Outlook for e-mail and calendar. Maybe you’ve never really made the most of SharePoint and you could use an Open Source alternative for file sharing and running your intranet.
The issue is this: you can save huge amounts of money by using Open Source products rather than Office 365, but you will need to spend time learning how to use each ‘best of breed’ alternative and how to integrate it with the other products. That will take up someone’s time. Once you’ve weighed up the pros and cons, you can make a decision: keep the faith and stay with Microsoft (and keep a great CV for another job using Microsoft products), or save money and spend lots of time as you take your first steps into the wilderness. What you’ll find is that the wilderness is quite full of people who’ve also stepped away from the easy choice.
What will you do?
Sunday, 29 June 2014
Sunday, 22 June 2014
Plus ça change
I’ve worked with mainframes for over 30 years and I’m used to seeing trends moving in one direction and then, a few years later, going in the opposite direction. Each initiative gets sold to us as something completely new and the solution that we’ve been waiting for. I imagine you share my experience. I originally worked on green screens with all the processing taking place on the mainframe. In fact, I can remember decks of cards being punched and fed into the mainframe. I can remember the excitement of everyone having their own little computer when PCs first came out. I can remember client/server being the ultimate answer to all our computing issues. Outsourcing, not outsourcing – we could wander down Memory Lane like this for a long time.
What always amazes me is when I’m working with sites that are predominantly Windows-based, and they still get that frisson of excitement over an idea that I think is pretty commonplace. It was only a few years ago (well maybe about five) that the Windows IT teams were all excited about VMware and the ability to virtualize hardware. They couldn’t believe mainframes had been doing that since the 1960s.
Then there was the excitement about using Citrix and giving users simple Linux terminals rather than more expensive PCs. Citrix have a host of products, including GoToMeeting – their conferencing software. With Citrix desktop solutions, all the applications live on the server rather than on each individual computer. It means you can launch a browser on your laptop, smartphone, tablet, or whatever device you like that has a browser, and see a Windows-looking desktop and all the usual applications. So, it’s just like a dumb terminal connecting to a mainframe, which does all the work and looks after all the data storage. Nothing new there!
And now Microsoft are selling Office 365, which, once you’ve paid your money, means that all the applications live in the cloud somewhere, and so does the data. It seems that all subscribers are like remote users, dialling into an organization’s mainframe that could be located in a different country or on a different continent. Looked at another way, IT departments are in many ways outsourcing their responsibilities – and we all remember when outsourcing was on everyone’s mind.
Office 365 seems like a very mature product and one whose time is about to come. You get more than just the familiar Office products like Word and Excel. You get SharePoint, Lync, and Exchange (and I’m talking about the Enterprise version of Office 365). Lync lets users chat to each other – a bit like MSN Messenger used to. And SharePoint provides you with an intranet as well as file and document management capabilities. You get Outlook, Publisher (my least-favourite piece of software), Access (the database), and InfoPath (used for electronic forms). You also get a nicely integrated Yammer – Microsoft’s Enterprise Social Networking (ESN) tool. There’s also Power BI, a suite of business intelligence and self-serve data mining tools coming soon. This will integrate with Excel, so users can use the Power Query tool to create spreadsheets and graphs using public and private data, and also perform geovisualization with Bing Maps data using the Power Map tool.
And while the actual tools available on these different platforms and computing models differ over time, it’s the underlying computing concepts that I’m suggesting come and go, and come and go again! It’s like a battle between centralization and decentralization. Everyone likes to have that computing power on their phone or tablet, but whenever you need to do some real work, you connect (usually using a browser) to a distant computing monolith. So, plus ça change, plus c’est la même chose.
Labels: blog, Citrix, client/server, Eddolls, GoToMeeting, Linux, mainframe, Microsoft, Office 365, Outsourcing, VMware, Windows
Sunday, 15 June 2014
Having your cake and eating it
Everyone knows that mainframes are the best computers you can have. You can run them locally, you can hide them in the cloud, and you can link them together into a massive processing network. But we also know that there are smaller platforms out there that work differently. Wouldn’t it be brilliant if you could run them all from one place?
Last summer we were excited by the announcement from IBM of its new zBC12 mainframe computer. The zBC12 followed the previous year’s announcement of the zEC12 (Enterprise Class), and 2011 saw the z114, with 2010 giving us the z196. So what’s special about those mainframes?
Well, in addition to IFL, zIIP, and zAAP specialty processors, and massive amounts of processing power, they came with the IBM zEnterprise BladeCenter Extension (zBX), which lets users combine workloads designed for mainframes with those for POWER7 and x86 chips, like Microsoft Windows Server. So let’s unpick this a little.
One issue many companies have after years of mergers and acquisitions is a mixed bag of operating systems and platforms. They could well have server rooms across the world and not really know what was running on those servers.
IBM’s first solution to this problem was Linux on System z. Basically, a company could take hundreds of Linux servers and consolidate them onto a single mainframe. They would save on energy to drive the servers and cool them, and they would get control of their IT back.
IBM’s second solution, as we’ve just described, was to incorporate the hardware for Linux and Windows servers in its mainframe boxes. You’d just plug in the blades that you needed and you had full control over your Windows servers again (plus all the benefits of having a mainframe).
But what if you could actually run Windows on your mainframe? That was the dream of some of the people at Mantissa Corporation. They did a technology demo at SHARE in August 2009. According to Mantissa’s Jim Porell, Network World read their abstract and incorrectly assumed that they were announcing a product – which they weren’t. The code is still in beta. But think about what it could mean: running all your Windows servers on a mainframe. That is quite a concept.
Again, according to Jim, they can now have real operating systems running under their z86VM, although, so far, these are free versions of Linux. Their next step will be to test it with industrial-strength Linux distros such as Red Hat and SUSE. And then they will need to get Windows running. And then they’ll have a product.
Speaking frankly, Jim said that currently they have a bug in their Windows ‘BIOS’ processing in the area of plug-and-play hardware support. Their thinking is that it’s a mistake in their interpretation of the hardware commands and, naturally, they’re working to resolve it.
The truth is that it’s still early days for the project, and while running Linux is pretty good, we can already do that on a mainframe (although you might quibble at the price tag for doing so). But once the Mantissa technical people have cracked the problems with Windows, it will be a product well worth taking a look at, I’m sure. But they’re not there yet, and they’re keen to manage expectations appropriately.
Jim goes on to say that once the problems are solved it’s going to be about performance. Performance is measured in a number of ways: benchmarks of primitives, large-scale benchmarks for capacity planning, and end users’ experience of what they are willing to tolerate. So it seems that they have business objectives around performance where they could be successful if they supported only 10 PCs, more successful with 100 PCs, and more successful still if they can support 1000 PCs.
Jim Porell describes z86VM as really just an enabler for solving a wide range of ‘customer problems’, providing direct access between the traditional mainframe and the PC operating systems that are co-located with it.
I think that anything that gets more operating systems running on mainframe hardware has got to be good. And I’m prepared to wait for however long it takes Mantissa to get Windows supported on a mainframe. I’ll definitely feel then that I’m having my cake and eating it!
Sunday, 8 June 2014
Conversational Monitor System
I do a lot of work with Web sites and people are often talking about using a CMS to make it easier to upload and place content. For them, CMS stands for Content Management System, but I thought it would be fun to revisit Conversational Monitor System – one of the first virtual machines that people really enjoyed using because of its flexibility and functionality.
Our story starts in the mid-1960s at IBM’s Cambridge Scientific Center, where CMS – then called Cambridge Monitor System – first saw the light of day running under CP-40, a VM-like control program, which then developed into CP-67. It provided a way of giving every user the appearance of working on their own computer system. So, every user had their own terminal (the system console for their system) as if they had their own processor, unit record device (what we’d call a printer, card reader, and card punch), and DASD (disk space). The control program looked after the real resources and allocated them, as required, to the users.
In 1972, when IBM released its 370-architecture machine, a new version of VM called VM/370 became available. Unlike CP/CMS, IBM supported VM/370 and CMS retained its initials, but was now known as Conversational Monitor System – highlighting its interactive nature. This commercial product also included RSCS (Remote Spooling Communication Subsystem) and IPCS (Interactive Problem Control System), which ran under CMS.
VM itself was originally not very popular within IBM, but, through quite an interesting story, survived. VM/370 became VM/SE and then VM/SP. There was also a low-end variant called VM/IS. Then there was VM/SP HPO before we had VM/XA SF, VM/XA SP, then VM/ESA, and now z/VM.
But returning to CMS, it was popular because you could do so much with it. You could develop, debug, and run programs, manage data files, communicate with other systems or users, and much more. When you start CMS (by issuing IPL CMS), it loads your PROFILE EXEC, which sets up your virtual environment in exactly the way that you want it.
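To give a flavour of that, here’s a minimal sketch of what a PROFILE EXEC might contain, written in REXX. The minidisk address and the particular settings are purely illustrative assumptions, not taken from any real system:

/* PROFILE EXEC - a minimal, illustrative CMS start-up exec */
'CP SET PF12 RETRIEVE'    /* use PF12 to recall previous commands */
'SET BLIP OFF'            /* turn off the terminal blip character */
'ACCESS 192 B'            /* access an extra minidisk as filemode B */
say 'Profile complete - ready to work'

Every time you IPL CMS, something along those lines runs automatically, so your virtual machine comes up configured just the way you left it.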
Two products that made CMS users very happy were PROFS and REXX. PROFS (PRofessional OFfice System) became available in 1981 and was originally developed by IBM in Dallas, in conjunction with Amoco. It provided e-mail, shared calendars, and shared document storage and management. It was so popular that it was renamed OfficeVision and ported to other platforms. OfficeVision/VM was dropped in 2003, with IBM recommending that users migrate to Lotus Notes and Domino, which it had acquired by taking over Lotus.
REXX (Restructured Extended Executor) is an interpreted programming language developed by Mike Cowlishaw and released by IBM in 1982. It is a structured, high-level programming language that's very easy to get the hang of. In fact, it was so popular that it was ported to most other platforms. REXX programs are often called REXX EXECs because REXX replaced EXEC and EXEC2 as the command language of choice.
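As a flavour of the language, here’s a trivial, made-up REXX exec – the name ADDUP and what it does are just for illustration:

/* ADDUP EXEC - a trivial REXX example that adds up its arguments */
parse arg numbers                  /* everything typed after the exec name */
total = 0
do i = 1 to words(numbers)         /* loop over each blank-delimited word */
  total = total + word(numbers, i)
end
say 'The total is' total           /* display the result on the terminal */

Typing ADDUP 1 2 3 at the CMS command line would display ‘The total is 6’ – and those few lines would run, more or less unchanged, on the many other platforms REXX was ported to.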
CMS users will remember XEDIT, an editor written by Xavier de Lamberterie that was released in 1980. XEDIT supports automatic line numbers, and many of its commands operate on blocks of lines. The command line allows users to type editor commands. It replaced EDIT as the standard editor, and, again, XEDIT was very popular and was ported to other platforms.
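XEDIT was also programmable: XEDIT macros are simply REXX execs (with a filetype of XEDIT) that issue editor subcommands. Here’s a hypothetical sketch – the macro name and the strings being changed are invented for illustration:

/* CLEANUP XEDIT - an illustrative XEDIT macro written in REXX */
'TOP'                        /* go to the top of the file being edited */
'SET CASE MIXED IGNORE'      /* make searches case-insensitive */
'CHANGE /teh/the/ * *'       /* fix a typo on every line, every occurrence */
'FILE'                       /* save the changes and leave the editor */

Little macros like this were one of the reasons people could be so productive in CMS.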
CMS provided a way to maximize the number of people who could concurrently use mainframe facilities at a time when those facilities were fairly restricted. It was a hugely successful environment, spawning tools that were themselves ported to other platforms because they were so successful. I used CMS and VM a lot back in the day, and even wrote two books about VM. Like many users, I have very fond memories of CMS and what could be achieved with it.