Won't Get Fooled Again
By Scott Hirsh
Now that the shock of HP's end-of-life decision has worn off, it's come to this: was it a worst practice to be an HP 3000 user these last few years? Should we have read the warning signs and prepared for the inevitable? What could we have done differently, and how do we avoid painting ourselves into a technology corner in the future?
We now know more than ever that there is no loyalty on either side of the bargaining table. The IT culture of planned obsolescence accelerated over the last five years of the dot-com boom and bust, and any hope that a technology vendor will watch out for the customer is laughable at best. Both Compaq and HP customers now have legitimate gripes about what's in and what's out, and most other vendors are just as bad. Ironically, in the computing arena, it's IBM that seems to protect its customers best. I say ironically because the shop I managed for 12 years was an IBM System/3 to HP 3000 Series III conversion. Who could have imagined?
I have been selling networked storage for the past two years, and I was struck this year by the brazenness of one vendor's end-of-life announcement, which came almost immediately after a competitive bid I lost. The customer chose a competitor in May, only to learn that the equipment he bought would be officially end-of-lifed in July, with not even parts and maintenance available after June 2003. Folks, it's a jungle out there!
So what can we do as technology managers to minimize risk and protect ourselves when designing our IT environments going forward? The key, as you'll see, is a concept that vendors understand intimately: transfer cost. Transfer cost is the cost of changing from one vendor or platform to another. For example, switching long-distance carriers involves low transfer cost; you hardly know it happens. But changing from one operating system to another (e.g., HP 3000 to Solaris) means high transfer costs. Vendors try to make transfer costs high, without being obvious about it, to discourage customers from switching to competitors. It is your job to identify potential transfer costs in your technology decisions, and to keep them as low as possible.
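The transfer-cost idea can be made concrete with a small sketch. Every figure below is an invented placeholder, not a real migration estimate; the point is simply that an OS migration racks up costs across many components at once, while a carrier switch touches almost none of them.

```python
# Hypothetical illustration of transfer cost: the sum of every cost a
# customer pays to move from one vendor or platform to another.
# All dollar figures are invented for illustration only.

def transfer_cost(retraining, data_migration, app_porting, downtime):
    """Total cost of switching vendors or platforms."""
    return retraining + data_migration + app_porting + downtime

# Switching long-distance carriers: you hardly know it happened.
carrier_switch = transfer_cost(retraining=0, data_migration=0,
                               app_porting=0, downtime=0)

# Moving from the HP 3000 to Solaris: every component is large.
os_switch = transfer_cost(retraining=50_000, data_migration=75_000,
                          app_porting=250_000, downtime=25_000)

print(carrier_switch)  # 0
print(os_switch)       # 400000
```

A real estimate would have more line items (licenses, parallel running, lost business during cutover), but the asymmetry is the same: vendors profit when your `os_switch` number is large.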
Mix and Match
For the longest time I enjoyed being an HP 3000 user and an HP customer. Rather than seeing the HP 3000 as a proprietary platform I was locked into, I looked at it as an integrated platform where everything was guaranteed (more or less) to work together, unlike the emerging PC world, where getting all the various components to work together was a nightmare. But around the time HP decided commercial Unix was the next big thing, the concept of heterogeneous computing was reaching critical mass. As discussed in my last column, the glory days of the HP 3000 were just too easy. IT decision makers seemed to have a complexity death wish, and we're now living with that legacy. Consequently, the way to lessen risk today, in my opinion, is to spread the risk over multiple platforms and vendors. Trust no one, and always have a Plan B.
This means assuming that everything that has been happening for the past few years (vendor consolidation, commoditization of hardware, and the subordination of the operating system to the DBMS and application) will continue unabated.
The days of the all-HP shop are over. Even if you decide to standardize on HP or Sun or IBM, you should do so knowing that one day you may need to switch gears abruptly. In other words, these companies are noted for their hardware, and you must be prepared to dump that hardware for another brand with as little pain as possible.
Separation of OS and Hardware
When the concept of hardware independence first manifested itself in the form of Posix, I was intrigued. Was this too good to be true, the user community having the upper hand over its technology destiny? Perhaps not the holy grail of binary compatibility among Unix flavors, but a quick recompile and hello, new hardware. Well, it was too good to be true, and nobody's talked about Posix lately, that I've heard anyway. Likewise for Java: write once, run everywhere, slowly. Yes, there are lots of handy applets and specialized tools that are Java-based, but many of these Java applications use extensions, the scourge of openness.
As I see it, there are two main operating systems that facilitate hardware independence: Linux and Windows. Each has its issues from the standpoint of transfer costs. Linux, of course, comes in several flavors, all based on the same kernel but tweaked just enough to derail binary compatibility. (Can't we all just get along?) And Windows is from Microsoft, which knows something about locking people in and then shaking them down. But while these two options are not without their problems, they represent at least the short-term future of computing.
Of the two hardware-independent operating system options, Linux seems to me the better story. Clearly the flavor is a major decision, with Red Hat having the most support from major hardware vendors. But I have seen other distributions, notably SuSE, adopted in large organizations, so don't assume there is only one choice. Linux, as appealing as it is, still has some catching up to do in scalability to match the major Unix brands. But the effort I have seen lately from IBM and Dell points clearly to Linux as a player in even the largest environments. Again, the idea is not to turn this into a "Linux everywhere" discussion, but to illustrate Linux as a means of avoiding being painted into a corner.
For example, assuming a Linux solution scales sufficiently to meet your needs, you have the flexibility of running on HP, IBM (even the mainframe!), and Dell, to name a few. You can run Oracle 9i RAC. And the most popular applications follow Oracle. If you're really adventurous, you can pursue open source software like MySQL and completely roll your own. There are a lot of options here.
But will everything run on Linux? No. You will almost certainly need some kind of Windows presence, although I do business with some companies that absolutely, positively want nothing to do with Windows (and Microsoft). But that's not typical. Most of us in IT resign ourselves to doing at least a little business with Microsoft. Microsoft, however, has shown itself to be the boa constrictor of software companies: it never stops squeezing, especially when it knows it has your critical applications. The hardware independence story is good, but Microsoft substitutes software dependence. Proceed with caution.
And the proprietary Unix flavors (HP-UX, Solaris, AIX) remain competitive for large-scale ERP and other mission-critical enterprise applications. Heck, even the mainframe lives on. But the principle here is that even if you choose an operating system that runs on only one vendor's hardware, you can at least mitigate the risk by choosing a DBMS and applications that can be moved to other hardware and another OS if necessary.
An Open Foundation
Having been immersed in networked storage for two brutal years, I've had a lot of time to think about infrastructure architecture. The first lesson is that storage (along with networking) is the foundation of an IT architecture. So it stands to reason that an infrastructure that's built to last will begin with external storage, ideally from a company dedicated to storage, with as much heterogeneous operating system support as possible. Several companies fit that description, among them Hitachi Data Systems, Network Appliance and EMC.
What you get from an external storage platform that supports multiple operating systems is the ability to change vendors, hosts and operating systems with a minimum of fuss. Yes, a migration of any kind is not without its pain, but it's a lot more painful when all your hardware and software is tied to one vendor. That's a lesson everyone reading this should have learned by now.
Furthermore, these independent storage players have extensive expertise in supporting multiple platforms, including migrating customers from one to another. And frankly, unless you're contemplating a non-mainstream operating system, networked storage is an excellent investment, because it is a best practice for storage vendors to provide ongoing support for any operating system with critical mass. For example, any HP 3000 users currently running on EMC Symmetrix will have no problem using that same storage with HP-UX, Solaris, Linux, Wintel and many others. If you're running on internal disk, you're stuck with HP-UX at best (not that there's anything wrong with that).
The Best Offense Is a Good Defense
Here are some quick guidelines that recap the concept of minimizing transfer costs:
Start with a networked storage platform that supports as many operating systems as possible. This is the foundation layer for your IT infrastructure.
The best Total Cost of Ownership in IT comes from consolidation. However, that doesn't necessarily imply homogeneity. It's a matter of degree, and of physical location as well.
Software drives hardware. Choose your DBMS, applications and tools based on support for multiple operating systems and hardware. Be cautious about any decision that locks you into one vendor. For example, SQL Server-based solutions, which run only on Wintel, will have higher transfer costs than Oracle or Sybase solutions.
Keep your vendors honest, but at the same time don't underestimate the value of a true partnership. One company I consulted for dropped HP after learning that HP felt it owned them. Any time one side thinks it has the other over a barrel, there's bound to be trouble. We're all in this together.
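One way to act on the "software drives hardware" guideline is to isolate vendor-specific choices behind a neutral interface. Here is a minimal sketch in Python, with a hypothetical table and invented figures: the program is written against the generic DB-API and plain ANSI SQL, so swapping the DBMS means revisiting one connection factory rather than every query.

```python
# Minimal sketch: keep DBMS transfer cost low by coding to Python's
# generic DB-API and ANSI SQL. sqlite3 stands in for any vendor's
# driver; the orders table and its figures are hypothetical.
import sqlite3

def open_connection():
    # The only vendor-specific line in the program. To switch DBMS,
    # replace this driver call (e.g., with an Oracle or Sybase driver).
    return sqlite3.connect(":memory:")

conn = open_connection()
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, amount INTEGER)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(1, 100), (2, 250)])
cur.execute("SELECT SUM(amount) FROM orders")
total = cur.fetchone()[0]
print(total)  # 350
conn.close()
```

The queries themselves would run against any DB-API driver that accepts this ANSI subset; only `open_connection` and the driver's parameter placeholder style would need review, which is exactly the low-transfer-cost position you want to be in.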
The Glue That Holds It All Together: You
In the new, defensive, minimum-transfer-cost environment, IT departments take on the role of systems integrator. That's the catch to designing maximum flexibility into your environment: the IT staff must make everything work together, and be prepared to shift gears at a moment's notice. To me, that's the silver lining in this otherwise dreary story of no loyalty and diminishing options. More than ever, it's the people who make the difference.
Back in the day, hardware was expensive and people were not. Today, relatively speaking, hardware is cheap and the people (and software) are expensive. Don't let the current dot-com meltdown cloud the issue. Yes, a lot of IT people have been laid off. But that doesn't mean IT runs on autopilot.
Nobody knows better than HP 3000 system managers what it's like to run 24x7x365. So nobody knows better what a challenge it will be to uphold that lofty standard in a world with no HP 3000. Perhaps the greatest legacy of the HP 3000, and what will ensure our continued leadership in IT, is the hard-earned knowledge of what's a best practice and what is not.
Scott Hirsh (firstname.lastname@example.org), former chairman of the SYSMAN Special Interest Group, is an HP Certified HP 3000 System Manager and founder of Automated Computing Environments (925.962.0346), an HP-certified OpenView consultancy that consults on OpenView, Maestro, Sys*Admiral and other general HP e3000 and HP 9000 automation and administration practices.
Copyright The 3000 NewsWire. All rights reserved.