Leading the
IA-64 Evolution
for HP 3000s

Winston Prather must feel like he’s experiencing déjà vu all over again. The head of R&D for the HP 3000 Commercial Systems Division (CSY) was working in HP’s labs during the company’s last push toward a new 3000 RISC architecture. That Spectrum project of the 1980s yielded today’s 900 Series HP 3000s and MPE/iX. Now Prather has been given the all-clear to make the 3000 ready for another new chip architecture, the IA-64 line jointly designed by HP and Intel. Prather gave us the technical plans for the project transition, as well as notes on how moving to IA-64 will compare to Spectrum – the last successful migration for HP 3000s.

Does it feel like you’re approaching the same kind of project that HP pulled off in the 1980s?

It’s very similar, in that it’s moving from one architecture to another. One thing that makes me feel good about it is that it’s something we’ve done before. If you go back to the Spectrum program 10 years ago, it was a major unknown for HP or any other vendor. I think we pulled it off pretty successfully, and we learned quite a bit. We’ll use some of the same learning and techniques as we move to the new architecture.

That’s one of the things that’s given HP the confidence that we have. We can say not only “been there, done that,” but “designed the technology underneath it.” Comparing our position to where we think the competition will be, they will not have been able to say they’ve been there, done that. And they don’t have the advantage of working with Intel for the past five years.

When you’re thinking about competition, are you thinking of other people that are moving to Merced, like Sun?

That was my point – not specific in a 3000 or 9000 competition, but Hewlett Packard compared to Sun or Digital. We clearly feel we have a leg up on the competition.

Will the compilers and software be doing a lot more work than they were in the previous architecture?

From a technology point of view that compiler technology has really taken another leap forward. The idea that we’re going to be reliant on the compilers is still the same – although the techniques and the complexity have grown. When you look at some of the dynamic code optimization that the compilers are going to do, it’s even more impressive.

Will the HP 3000 customers have an experience like in the late 1980s, where they had an MPE V-Classic group of programs, and MPE/XL programs for the RISC systems – and they could interoperate between the two programs, so long as they were willing to go to an equivalent of a Compatibility Mode on the new architecture?

Yes, that’s the goal. The 3000 customers who experienced the move from Classic to XL know exactly what they’ll be looking at as they move forward. There will be the same kinds of concepts: Native Mode IA-64 compilers and object code, and PA-RISC compilers and object code, and translators. Our customers that have gone through this will understand exactly what this transition will feel like.

Is there anything radically different in the strategy to move from one architecture to the next compared to the last time around?

One new concept is that when we moved from Classic to XL, the translators were static translators: you basically took an old program, ran it through a translator, and it emitted PA-RISC code. One of the concepts you will see when we move to the new architecture is dynamic translation, where you won’t have to run a program through an object code translator that spits out code you then execute natively. Instead, we’ll just trap and do that translation for you on the fly.

One of the concepts that’s being explored is the dynamic translation, which means you wouldn’t have to do any of that. You could just put on the same program and say go, and we will catch it and translate it and optimize it dynamically.

You only do that translation once?

I’m not sure how much of the translation is saved away for future use and how much can be done quickly enough that the saving isn’t necessary. They can use run-time knowledge of how the program executes to change the program, and then make it execute faster. This does bring some challenges – for example, debugging. We’re exploring all sorts of new techniques for that. That’s one of the more exciting technologies – the concept of dynamic translation.
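The trap-and-translate scheme Prather describes – catch legacy code the first time it runs, translate it on the fly, and reuse the result – can be sketched in outline. This is a minimal illustrative sketch, not HP’s implementation; the block representation, the `translate` function, and all names here are hypothetical stand-ins.

```python
# Illustrative sketch of dynamic (trap-and-translate) execution.
# All names and data structures are hypothetical, not HP's design.

translation_cache = {}  # legacy code block -> translated native block

def translate(block):
    """Stand-in for translating one block of legacy code to native code."""
    return [("native", op) for op in block]

def execute(block):
    """Run a legacy block: translate on the first trap, reuse thereafter."""
    key = tuple(block)
    if key not in translation_cache:            # "trap": first encounter
        translation_cache[key] = translate(block)  # translate on the fly
    return translation_cache[key]               # execute the cached native code

# A toy legacy program; the first and third blocks are identical.
legacy_program = [["load", "add"], ["store"], ["load", "add"]]
for blk in legacy_program:
    execute(blk)

print(len(translation_cache))  # 2 - repeated blocks hit the cache
```

The point of the cache is the trade-off Prather raises: saving translations avoids repeated work, at the cost of bookkeeping that complicates things like debugging.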

To look at the scope of the CSY commitment to IA-64, are there not only extra resources inside CSY but also concurrent resources inside HP’s Computer Languages Lab?

The decision to put MPE on IA-64 is a company decision, not something one division can do by itself. We already leverage a lot of the activities throughout the Enterprise Systems Group [ESG, which handles the HP-UX and NetServer servers as well as the 3000]. This is clearly an ESG commitment, and that’s why it was very important that all of the organizations were behind this.

What do you know today about the timing of all this? How long should customers look for it after 2000?

We really don’t have a timeline right now. I don’t think customers are going to say, when? It goes back to my question about 30 percent performance. What they care about is that we’re going to deliver the performance. We have some flexibility in different ways that we can deliver that. For example, PA is going to be around for quite some time. We’ll continue to push the performance that way. At some point in that overlap period, IA-64 will be available, years before it’s needed for performance. That gives customers a window of staying on PA to get 30 percent more performance a year, or they can move to IA-64.

Do you have plans yet to be in on the IA-64 Developer’s Symposium at HP World?

We’re scrambling right now to figure out if we can integrate into what they had already planned. That one is still open. It’s not real clear, but if at all possible we need to be part of that.

Was there anything that happened on the technical level that made it easier to decide to commit to doing this for the 3000?

I don’t think there was any one technical thing, an “a-ha” that gave us a breakthrough and made it easier than we first thought. That wasn’t really the thinking. For me, what this decision means to our customers is that it once again shows HP’s commitment to the 3000’s future. It’s more of a customer satisfaction and commitment to customer decision, as opposed to a technology breakthrough.

We’ve had our engineers working on IA-64 since the creation of it, working with Intel and working with the team creating the architecture, in anticipation of this type of migration. From a technical perspective, we’ve been working with them to understand what it would take – what we’d have to do, what it means to compilers, and what it means to the operating system – for quite a while.

Are customers going to be looking at a world where some of their applications are going to be built for 32-bits, and others are built for 64-bits? How will they make those two kinds of applications interoperate?

We still do have some technical decisions to make as we move forward, about trying to ensure that customers don’t have to deal with multiple binaries of the operating system. Our goal is to make it not noticeable to customers. They should not have to care. They get a tape and they just install it. If we do get to a point where we do need multiple binaries, then they won’t notice it.

There was that kind of a fork back in 1987, because you had MPE V and MPE XL.

Right. Even before this new architecture, with large memory and large files and 64-bitness, we were really committed to minimizing that impact that customers see. When we move to the new architecture, it will be even harder to minimize that impact. The probability that they will see multiple binaries goes up. When you’re on a PA box, we want to avoid customers having to deal with multiple binaries.

Your reference to PA box indicates there are going to be two different kinds of 3000s by then?

Our plan in moving to IA-64 will be very similar to the Unix side of the camp. For example, there will be a complete new platform available prior to IA-64. That would be a box upgrade for the high-end and midrange systems. That new platform will be able to run PA processors, and you can take out those PA processors and plug in IA-64 processors. So that the transition, if you will, from a PA platform to an IA-64 platform is just a board upgrade.

The roadmap to the future for the 3000 customers will look something like when they need the performance power, they would upgrade either their midrange line or their high-end line to their new box, and run PA on it. They could continue to upgrade those boxes with PA processors, or at some time when they’re ready, they could then pull out their PA processor boards and put in IA-64 processor boards.

So at that moment in time they wouldn’t even have to change to another version of the operating system?

That, I’m sure, will have to happen. It’s a question of how non-intrusive it will be. I can pretty much guarantee that you would have to do some sort of upgrade of the operating system. There will clearly be another version of the operating system, like moving from Release X to Release Y to Release Z. The transition would look like “Go to Release Z. Swap processor boards. Reboot. There you are.” Very similar to what you did when you moved to Spectrum.

The major transition isn’t plugging in the processor board, but getting up on the next operating system?

Replacing the operating system is it, and making it easier than we did before. When you moved from the 70s to the Series 930, that was a complete box swap, which made it a larger process. This would be just a board swap.

Are the people making MPE/iX applications going to have help soon in doing the work on IA-64 transitions? How much change are they going to experience under the hood in moving applications?

Remember, they wouldn’t necessarily need to do anything. But they’ll want to, in order to take full advantage of the new architecture. We’ve already started the process of working with the applications providers, helping them understand what the transition path would look like and when they would need to get assistance, and what kind of assistance do they want. We’ve started all that.

Do you think you want to go the route you did in the Spectrum project, and set up Technical Assistance Centers?

That’s still to be determined. We need to continue to talk to [developers] and see whether that’s the right idea, or whether we could do it another way.

Well, some of the fundamental assumptions of working with computers have changed a bit since then. For one thing, the hardware is going to be more affordable than the new hardware was in 1986.

Right. One of the reasons why we went to the Access Centers before was to share the hardware. It’s probably much less required. But that’s something we’ll have to figure out as we move forward. Right now we just had a big reseller meeting in Venice, where we told the resellers what we were doing and started working even more closely with them on what it means to them and how we will work with them to move forward.

What does the picture look like for an application provider who’s not doing work in Unix right now? Will the migration be easier for people with some kind of HP-UX version of their program, because the tools and programs have been in place a little longer for HP-UX?

If there’s any advantage that they would have, it would be that they’d already gone through a little of the process on the Unix side. But I don’t know that there’s any other difference.

They will have been through the process, but a lot of our application providers have been around a long time, and they’re already going to know what the process looks like – because they’ve already done it once. It will be very simple for them.

Do you think there’s any advantage the MPE-only software shop would have in trying to make this transition?

I don’t know that there’s an advantage. A generic advantage that any vendor has is only working with a limited number of binaries. If you ship on 13 different operating systems, then that’s 13 different migrations.

I’m just trying to figure out if it would be easier than coming from a Unix perspective.

I’m not sure. It wouldn’t surprise me.

Can you talk about anybody who’s participating actively in this kind of migration now?

I really can’t yet, because I don’t think we’re at the point where I personally know which vendors are allowing us to show their commitment. I don’t know what’s public and what’s not, so I’d rather not comment. I can say we’re working with many of them.

Is it your feeling that the 3000 customer base is going to have a lot more homegrown applications they will be moving across than a customer base in another environment moving to IA-64? Will you be tuning the migration program and tools more toward people that have homegrown things?

I think that’s something we should look into. I don’t know that we’re far enough along to have that information. I agree that the 3000 installed base has a higher percentage of homegrown applications. I think your question is “Will they need access to a set of tools that may not normally be available to end-user customers?” That’s a good question, but I think we’re a little early.

What do the customers have to do to get this installed? Make sure they have budget in place?

I think the customers shouldn’t focus on the technology. They should focus on the fact that this is one of the ways Hewlett Packard is going to ensure we deliver on the performance commitment that we make. We’re promising a 30 percent performance increase per year, and one way to get that is going to be PA for quite some time, and then there will be a transitional overlap period where they’ll move to the new architecture, and that will require some recompilation if they want to achieve maximum performance. But the transition should be as smooth as the one they’ve done in the past.

The reaction I think customers are going to have to this announcement is not a technology reaction, it’s going to be a confidence, commitment kind of reaction. I think they’re going to say great, glad to hear it, it makes me feel good about my current and future purchases.

We’ve been trying to shift away from focusing on technology. We’re focused on delivering the performance and functionality they need. And we’ll use whatever technology we need. We did the same thing with 64-bitness. Originally we said we don’t see a technology reason to move there. As the reason developed – customers needed large memory for performance, large files for storage – then we decided to use 64-bit types of technology. It’s the same thing here. You need performance, and the performance is going to scale, and it’s up to us to make sure we give you that performance. One way to do it is new architecture. There will be technologists that want to know about speculation, prediction techniques and the techniques that the compilers use. But I don’t think that’s the normal reaction.

Does it go without saying that IMAGE is going to move all the way?

We’ll have to move the databases.

Is there anything you know of right now that won’t make the cut which has a significant customer base?

Not really. I really believe the best way to think about this is been there, done that once, gonna do it again. From the compiler point of view, technology’s come a long way and that’s going to make it even easier.

So you’re moving to the third distinct generation of HP 3000?

Absolutely. Not many computers can say they’ve been through that. It’s interesting when you look at Hewlett-Packard and IA-64 in general, and how well we are positioned within the industry. It’s going to be interesting, because IBM is the only major vendor that hasn’t shown a commitment to IA-64. Even though Sun is going to waffle right now and say they’re IA-64 but they’re also SPARC, c’mon – really? It’s just a matter of time. And the same with DEC; although they talk about Alpha, they made a commitment to IA-64 from the NT perspective. How long can all of the other vendors producing chips do it? They’re not going to have the volume. When you look at the volume Intel is going to have with the new chip set compared to some of these other platforms, it’s going to be interesting to see how anyone can make inroads.

Did the delay of Merced have any impact on what CSY is doing with IA-64?

It really doesn’t impact us at all. Remember, our commitment is to the performance levels, and we’re still committed to those both on the 9000 side and the 3000 side. We have PA plans to ensure we’re going to get those performance levels. Then IA-64 is an evolutionary thing that will start sometime during that overlap. It really wasn’t a big deal for us. We’ll still be able to deliver our performance levels. The whole show wasn’t bet on Intel’s schedules.

It appears that the NT and Unix solutions from HP could really have used the IA-64 horsepower to meet performance goals, much more so than the HP 3000. Is that so?

Even on the NT and Unix side, we still have plans in place to deliver the performance. I don’t think it created that type of a problem for them, either. The customers that tend to buy Unix tend to be more technology focused, so they’re more interested in the architectures and the whole IA-64 thing. On the 9000 side we are in no way at risk of not delivering the performance we need because of Intel’s slip. From a marketing perspective, on the Unix side they market that technology a lot more. That’s what those customers want to hear, leading bleeding edge. So it’s put more pressure on them from that perspective.
From our side we already moved two or three years ago away from pushing technology. Our customers aren’t as hyped about it. Our public announcement of IA-64 is going to be more of a confidence announcement than a product message.

Have you run any MPE/iX programs on IA-64 simulators yet?

On some of the simulators, yes.

How does it feel to be going through the second evolution of HP 3000?

The magnitude of the task feels like a similar process with more complexity in the compilers. From a personal perspective it feels very similar. I had one of the project managers tell me something when we finally decided we were going to do this. “One of the most exciting times of my entire life was when we went through that Spectrum period,” he said. “And here we go again. I’m ready for it.” There’s that level of excitement from the engineers. Personally, it feels really great. The lab is gearing up for a number of years and a lot of fun.

One of the things I have to keep the lab focused on is that we have a tremendous number of things on our plate right now that will come long before IA-64. We have a ton of performance work that’s going on, another platform and more processors. All of that is the building blocks leading toward plugging in that IA-64 processor. If you would look at what the lab is working on right now, from the growth perspective, I summarize it as getting the operating system out of the way of all of the performance that’s going to come from the hardware. If you recall the one slide I flashed at the IPROF talk this year, that’s a lot of performance. The operating system has a lot of scalability work that’s got to be done, work on limits. Those are the kinds of things that are the building blocks.

That work is going to be done at first to benefit from the PA-RISC 2.0 horsepower, right?

The core design of the operating system will be the same as we move to IA-64. A lot of the code will be the same, just recompiled. The higher-level parts of the operating system will almost – it’s incredibly oversimplifying, but you can think of it as we’re going to do the same thing we ask our customers to do. We’re going to recompile the operating system. The generic algorithms that the Dispatcher uses and the Memory manager uses will be recompiled. All of the kernel will have to be recompiled and then modified slightly. All of the building-block work will leverage straight through to the version of the operating system that supports IA-64.

So the 64-bit work you’re doing for PA-RISC will dovetail with IA-64 work?

Absolutely. It’s all evolutionary.

On which end do you see IA-64 first becoming available for 3000 customers?

That’s still to be determined. If I had to guess, I would say high-end, because the biggest benefit would be that continued growth curve. The primary objective would be for that high-end growth, although the midrange and low end would be able to take advantage of it too, with different chips as they come available.

If you’re going to get ready to accommodate IA-64 in your 3000 shop, do you need to plan a couple of expenditures: one to get the system that can accept the new chips, and another to buy the processor boards themselves?

What the customer should be planning on is affording the performance they need. The fact that it comes from a box or a board upgrade is not really as relevant. When you look at it from the pricing point of view – and it’s obviously too early to say – I would assume it will be the same kind of price for performance.

Are the technical similarities between IA-64 platforms going to help deliver more applications to the 3000?

I would say they make it easier. I wouldn’t say it’s a slam dunk, so you can take any application from a Unix vendor. There’s a database issue and an intrinsic issue, and then there’s the bigger issue – with a lot of the application providers, it’s the number of binaries they have to support. The front ends of the compilers would look exactly the same. It clearly would help.

Have you examined if Java can be a good lever between IA-64 platforms?

We’re not really sure. Java can be leveraged across any architecture. That’s the whole concept, that it doesn’t matter. Because of the compiler technology that we’ll have, I think that the Java virtual machine executing on an HP IA-64 platform will outperform other Java Virtual Machines.

Winston Prather

R&D Manager

HP Commercial
Systems Division

Copyright 1998, The 3000 NewsWire. All rights reserved.