August 2004

Migration Blues: Coding for Conversion

Make changes, even in migration's shadow, without breaking what still works

By Roy Brown

Earlier this year, I was making non-trivial changes to an HP 3000 system with several databases and several hundred programs, and with a further design life of less than one month.

I was making these changes purely to simplify the data structures in this system, and to ease the extraction of data from it, so the data could be handed to the folks who were going to migrate it onto an existing Unix/Oracle system, with a go-live deadline of April 1st.

This was a pre-existing system, owned by the people doing the migration, with existing customers, that handles the business area in question; it had one or two extensions and customizations for our specific needs. They ran it sort-of-externally, on a sort-of facilities management (FM) basis. I’ve no doctrinal problems with any of that.

But the HP 3000 system still had to keep up its customary reliability and availability, and it had to do about $(umpty-ump) millions of invoicing at the March month-end, which also happens to be our fiscal year-end. And for some inexplicable reason, the financial folks didn’t want to miss any of this billing, and it couldn’t be late, either.

I had Adager to help me in my task, but nothing special beyond that.

This application was not expected to break under my ministrations, and it didn’t. How did I achieve this? Read on to see.

Code Completeness

Before I made any changes at all, I took a copy of the application, and its live data, onto a 928 I can use for testing. I also put all the sources I thought I needed — COBOL, forms files, UDALink, Formation templates, and so on — on the 928.

I then deleted all the executables — program code, XL, forms files, Formation templates, etc. — everything.

Then I recompiled the entire app, resurrecting every component for which I had source. Then the users and I tried to run it.

We found (by testing) one COBOL module missing, uncompiled. Its source was living in a ‘fixes’ source group, because we had thought it was just part of a one-time fix; it was actually part of the live system. I moved this source over to its rightful place.

We found (by inspection) three sources with different names that compiled to the same executable — an old one, the live one, and an abandoned upgrade to it. We deleted the two unwanted ones (guess which).

Everything else was good and up to date — though it is easy to posit problems, like an old source that makes it look as if you have a certain module, when what you have is not as up to date as the version you were running live. I’ve seen that before, though not on this system.

Bottom line? Build what you’ve got, and compare it with the live system now. This should tell you what, if anything, is ‘too old,’ ‘too new,’ or ‘missing.’
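
On MPE, even a crude first cut at that comparison is easy; for instance (the group and account names here are invented for illustration), list the rebuilt and the live executables side by side:

   !COMMENT List the freshly rebuilt program files and the live ones;
   !COMMENT compare names and EOF counts by eye (LISTF mode 3 adds
   !COMMENT dates), or redirect each LISTF to a disk file and compare.
   !LISTF @.PROG.TESTACCT,2
   !LISTF @.PROG.LIVEACCT,2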

Deal with that and you have a solid base from which to upgrade.

Change Reliability — Code

All our dataset definitions are in our Copy Library, and it is a standard that the record layouts programs use (for datasets, files, or whatever) derive from these, possibly with prefix substitution.

It is also a standard that records for these datasets are created with an INITIALISE xxx-DATA-RECORD statement.
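
To illustrate, here is a minimal sketch; the copybook name, the prefix placeholder convention, and the field names are all invented, not taken from the actual system:

   * In the Copy Library: copybook CUSTREC defines the record once.
   * (PFX) is a placeholder replaced at COPY time.
    01  (PFX)-DATA-RECORD.
        05  (PFX)-CUST-NO      PIC X(8).
        05  (PFX)-NAME         PIC X(30).
        05  (PFX)-NEW-FIELD    PIC S9(7)V99 COMP-3.

   * In a program: derive a local copy of the record, substituting
   * the program's own prefix for the placeholder.
    COPY CUSTREC REPLACING ==(PFX)== BY ==CM==.

   * And, per the site standard, create each new record with:
    INITIALIZE CM-DATA-RECORD.

Change the copybook, and every program picks up the new layout when recompiled; INITIALIZE then sets any newly added numeric fields to zero and alphanumeric ones to spaces.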

So I can:

1. Change the definition of a dataset in one place;

2. Recompile the entire system. I have a job that dynamically discovers the contents of the SOURCE group and compiles each one, putting the result in the XL if it won’t link as a stand-alone program (see the jobstream sketch after this list). I’m still looking for a way to tell, reliably and automatically, whether a chunk of object code is a program or a subprogram, ANSISUB or dynamic, so I know how to link it. The best I’ve found so far is to try to link it as a program and, if that fails, link it into the XL. I can’t even do it the other way round, as that doesn’t fail;

3. Know that all new records created will at the very least have all the new fields in, initialised to spaces or zeroes as appropriate;

4. Know that all new and old records will be correctly treated as regards alignment.
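
For what it’s worth, here is a rough sketch of that try-the-program-link-first logic for a single module, as MPE/iX jobstream lines. The file names are invented, and the compile and link commands are quoted from memory, so treat this as an outline of the technique rather than a working job:

   !COMMENT Compile one COBOL source to an object file (names invented).
   !COB85XL MYMOD.SOURCE, MYMOD.OBJ
   !COMMENT Try to link it as a stand-alone program first.
   !CONTINUE
   !LINK FROM=MYMOD.OBJ; TO=MYMOD.PROG
   !IF CIERROR <> 0 THEN
   !COMMENT The link failed, so treat it as a subprogram and
   !COMMENT add it to the XL instead.
   !LINKEDIT "ADDXL FROM=MYMOD.OBJ; TO=APPXL"
   !ENDIF

The real job wraps this in a loop over whatever it finds in the SOURCE group.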

Change Reliability — Database

All this can only happen, of course, once I’ve updated the database definitions in Adager. I do this using Adager’s jobstream building option, so that by the time I’ve got a procedure that works on the test database, I have a job that is guaranteed to carry out the exact same procedure on the live database. And fast, of course, which is what you need, as well as reliable, when trying to get the production system back on line as quickly as possible.

I say I use Adager’s jobstream building option, and I do, but sometimes I just regard Adager as a scripting language, and write the ‘code’ to make further changes to my databases directly by hand. And then test the results.

Development

Once I’ve got an updated test system where I know the code is complete, and has been updated to at least know about and respect the new extended dataset definitions in the Adagered database, then I can go on to implement the changes in functionality that are desired, program by program. How you do this is up to you.

This all started when an HP 3000 developer asked me about making several non-trivial additions to the functionality of a legacy application. He wanted to extend some master sets. Some of his management felt the 3000 app had an expected life of five years, and were reluctant to change the physical design of several critical datasets; they proposed instead to graft on a detail dataset with the required new fields, even though the fields belong on the master they relate to. But I think that if he follows the process above, it is much easier, and much more reliable, to place the new fields where they rightfully belong.

This would then mean that, for most of his programs, the new fields would be served up, and stored away again if updated, by the existing IMAGE accesses these programs already make. So ‘all’ he would have to worry about is the code that validates, maintains, and uses these fields.

Put them in a separate detail dataset instead (as ‘they’ propose, whoever ‘they’ are), and everywhere you want this stuff, you now have to write new IMAGE accesses to go get them, and to put them back.
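
To make the cost concrete, here is a hypothetical sketch of the difference in COBOL; the base, dataset, item, and field names are invented, and the usual status checking after each intrinsic call is omitted for brevity:

   * Fields on the master: the calculated read (mode 7) the program
   * already does returns the new fields too, with no new code.
   * ALL-ITEMS is the "@;" list, meaning every item in the set.
    CALL "DBGET" USING DB-BASE, CUST-MASTER, MODE-7, DB-STATUS,
                       ALL-ITEMS, CM-DATA-RECORD, CUST-KEY.

   * Fields in a separate detail set: every such spot now needs the
   * chain head located (DBFIND) and a chained read (DBGET mode 5)
   * as well, plus its own DBUPDATE or DBPUT to store changes back.
    CALL "DBFIND" USING DB-BASE, CUST-EXTRAS, MODE-1, DB-STATUS,
                        CUST-ITEM, CUST-KEY.
    CALL "DBGET" USING DB-BASE, CUST-EXTRAS, MODE-5, DB-STATUS,
                       ALL-ITEMS, CX-DATA-RECORD, CUST-KEY.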

That would be my defense for doing it ‘properly,’ anyway: solid concrete, and aesthetically pleasing too.

Anyway, you ought to do the ‘complete code’ check, even if you find a couple of modules missing that you have to rewrite from scratch. Otherwise you’ll spend five years tiptoeing round holes that might or might not be there. And what’s the betting that they’ll be in the stuff you have to rewrite for the new functionality, anyway?

Doing the above, I’m afraid I don’t have any horror stories for you. Not from the HP 3000 side, certainly. Likewise, no cost overruns, doing it properly. And no missed deadlines on the HP 3000-sourced material for the conversion: so far, every HP 3000-sourced contribution has been on time or early.

Conclusions

It’s a paradox, perhaps, that an HP 3000 system that was last updated in 2002, and was almost untouched for the whole of 2003, should, with its imminent demise, see more development activity in its last three months of life than it saw in the previous two years.

I wonder if this is a feature of migrations, or just peculiar to us?

Anyway, I guess that even if you are not considering migration, the same sort of considerations apply to existing apps. It’s not so much “Is it worth upgrading them if they only have a short life left, and how do you do it safely?” as “If you want to homestead them, how do you ensure they are complete, and packaged up nicely for that purpose?”

I have a couple more verses to sing for my “Migration Blues.” Verse Two is “Hell, we can’t support that.” This is how to tell, when migrating to a different system or platform, whether the thing the vendor wants you to give up on is an implementation quirk of your old system, or essential business functionality you can’t afford to lose. And how to square the circle, maybe.

Verse Three is “Hey, look at this great stuff I can do with MS Office,” or data manipulation off the HP 3000. There’s a lot of data manipulation in migration preparation, and you want to strike a nice balance between repeatability and having to create extensive special programs — and Office can help. I think of this as “Fun with Vegetables.”

Roy Brown is CEO of Kelmscott Ltd., a developer and consultant with more than 25 years’ experience on MPE and the HP 3000.


Copyright The 3000 NewsWire. All rights reserved.