
July 2002

net.digest summarizes helpful technical discussions on the comp.sys.hp.mpe Internet newsgroup and 3000-L mailing list. Advice here is offered on a best-effort, Good Samaritan basis. Test these concepts for yourself before applying them to your HP 3000s.

Edited by John Burke

Now that the HP merger is official and its execution is underway, with each side anxiously waiting around hoping to say “I told you so,” attention has turned to the silence coming from what remains of HP’s 3000 division, CSY. In case you missed last month’s front page article, CSY no longer exists as a division. The former GM and author of the end-of-life (EOL) strategy for the HP 3000, Winston Prather, has gotten his reward for work done on the merger plan and has flown off to greener pastures within HP. As I write, it is over seven months since HP’s 11/14 EOL announcement for the HP 3000, and HP has yet to offer anything beyond some version of “we are looking at all the options.”

Instead, we have yet another survey, this one under the auspices of OpenMPE, Inc. I fully understand that OpenMPE has little leverage with HP and has to play nice, at least officially. However, the tone of messages on 3000-L suggests things may get very ugly at HP World if HP does not make some positive movement on the many open homesteading issues that have been raised. Note that some of these issues matter not just to homesteaders and those wanting a future for MPE beyond HP, but also to customers who expect to migrate and do not expect to be finished before October of next year.

In the realm of the off-topic and way off-topic, postings to the newsgroup and mailing list treated us to the story of the Edmund Fitzgerald wreck, a discussion about tri-state logic, a long, often intemperate thread supposedly about how to stop suicide bombers, and the anniversary of the Battle of Midway. Also, pictures of the Milky Way “shredding” another galaxy, Malden Mills and enlightened management, the IT labor “shortage” and H-1B visas. Oh, yes, and a thread about what the average CE should know about the HP 3000, and the depressing news that few seem to know anything. An average of 50 off-topic postings per day! Even with all of this “noise,” there were still many hundreds of postings where people graciously shared technical information.

As always, I would like to hear from readers of net.digest and Hidden Value. Even negative comments are welcome. If you think I’m full of it or goofed, or a horse’s behind, let me know. If something from these columns helped you, let me know. You can reach me at john@burke-consulting.com.

Correction to my iSeries report

One of the issues that came up during the recent IBM iSeries Technology Forum that I reported on last month was backup technology and functionality. Several of us were not completely sure we understood what the iSeries offered and were having difficulty relating it to our MPE experience. I tried to focus on this and ended up giving the iSeries low marks in this area. As a courtesy, I sent IBM a copy of my report. They objected to my characterizations of the iSeries backup capabilities, so I set out to try again to understand what the iSeries really offered.

Upon reflection, it occurred to me that perhaps there was a terminology disconnect between our questions and IBM’s answers, so I rephrased my questions in a different context: first, comparing the iSeries and DB2 with Oracle’s online backup capability; second, asking about equivalents to HP’s or Orbit’s online backup capabilities; and finally, asking about an equivalent to MPE’s online SLT creation.

Doug Mack of DB2 UDB for the iSeries Product Marketing replied to my questions: “OS/400 (and hence, DB2 UDB for iSeries) offers a Save While Active capability which allows for online backups. OS/400 also includes a journaling (transaction logging) function. The journal receivers (the objects that contain the actual ‘log entries’) can be saved while journaling is proceeding with no performance degradation. Later the user is able to restore the backup and the receivers and do point-in-time recovery from the journal by applying forward to whatever point you deem appropriate. (As a point of interest, you can also use the journal to remove journal entries so you can recover back to a previous time without having to restore.) REMOTE JOURNALING is also supported such that the transaction logs (journal receivers) can actually be stored on a second system or second logical partition, and backups can take place from that second machine.

“For most user objects, including all user libraries, you can use Save While Active. However, full system backups where you are backing up system microcode and OS/400 system objects require a dedicated, quiesced system. Mirrored systems through High Availability software provide the ability to do a full system backup ‘online.’

“A typical backup scenario consists of a monthly ‘full system backup’ where you quiesce the system and include microcode and OS objects in that backup (we’ll call these SYSTEM objects). Typically, the only time you would need to do a quiesced kind of backup outside of the normal monthly full system backup would be if you added on microcode fixes, put on a new OS release, or installed a new licensed program. In most of these cases you are powering down the system anyway to achieve the change, and this is usually planned maintenance or, in the case of microcode fixes, something atypical (but it does happen).

“Software configuration changes (e.g., TCP/IP addresses, new tape device added) are considered USER objects and would be picked up by normal ‘save changed objects’ processes without the need for the system to be quiesced.

“As for hardware configuration changes, you can add disk live without requiring a shutdown. The system will start using the disk and automatically rebalances data across all the disks. No backup-specific issues here. If you add memory or do an upgrade, you do need to bring the system down, and it’s a good idea to have a full system backup before making this change, but that could be a combination of a full system backup plus daily changes captured since the full backup.”

After that extensive explanation, what this means to me is that iSeries user objects, including of course any DB2 databases, can be backed up online. However, the iSeries has nothing equivalent to MPE’s SLT, which can be created online and then used to re-install the system.
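
For readers who have not built one lately, creating an SLT online on MPE/iX is roughly this simple (a minimal sketch; the tape device, any configuration changes and any additional SYSGEN options will vary by site):

:SYSGEN
sysgen> TAPE
sysgen> EXIT
:

The point is that the tape can be cut while users are on the system, and later used to re-install or recover the system from scratch.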

A bug, a feature, or something else?

A writer to 3000-L thinks the following is a bug and should be fixed. “If you do a DBFIND and chained reads (DBGET mode 5) on a master using an index, then if you do a regular DBFIND on a detail under the master, the chained read on the master loses its place. You don’t get any special error codes, just end-of-chain on the next read on the master. I know workarounds (like a second DBOPEN), but I personally think it should be fixed, because it’s a trap.”

Tien-You Chen of the IMAGE lab, while not claiming this is a feature, suggested that the second DBOPEN workaround is the only workable solution. “The structure behind DBFIND and chain-get is a linkage of a master dataset and a detail dataset connected by the key item/search item pair. The B-tree can only be added to the key item of the master dataset; this link is then used to ‘B-tree’ access the entries in the detail dataset (i.e., a super-chain get). The B-tree DBFIND and chain-get to the master dataset is considered a special case of the super-chain concept, where we don’t access the detail dataset.

“So, consider the case where you do a B-tree DBFIND and then chain-get the master entries, and in between you do another non-B-tree DBFIND to the detail dataset with the same search item. IMAGE resets several flags in the internal data structure to denote this non-B-tree access. That’s why, when you go back to access the master dataset, IMAGE returns nothing: IMAGE doesn’t think it is a super-chain get anymore.

“I feel it is much better and cleaner to have another DBOPEN to handle the get/chain-get to the same pair of master/detail.”
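
To make the trap concrete, here is a rough sketch in C of the failing sequence and of the second-DBOPEN workaround. The base, dataset and item names are hypothetical, error handling is omitted, and the intrinsic declarations are simplified stand-ins for the real ones (on MPE/iX they would normally come from #pragma intrinsic or a site header); treat it as an illustration of the call order, not production code.

/* Simplified declarations; all IMAGE intrinsic parameters are by reference. */
extern void DBOPEN(char *base, char *password, short *mode, short *status);
extern void DBFIND(char *base, char *dset, short *mode, short *status,
                   char *item, char *argument);
extern void DBGET(char *base, char *dset, short *mode, short *status,
                  char *list, char *buffer, char *argument);
extern void DBCLOSE(char *base, char *dset, short *mode, short *status);

int main(void)
{
    char  base[32]  = "  ORDDB ";    /* hypothetical base, two leading blanks */
    char  base2[32] = "  ORDDB ";    /* second open, used for the workaround  */
    char  pass[]    = "READER;";
    char  master[]  = "CUSTOMERS;";  /* master with a B-tree on its key item  */
    char  detail[]  = "ORDERS;";     /* detail linked by the same search item */
    char  item[]    = "CUST-NO;";
    char  key[]     = "SM@";         /* wildcard key for the B-tree find      */
    char  list[]    = "@;";
    char  buffer[512];
    short mode1 = 1, mode5 = 5, status[10];

    DBOPEN(base, pass, &mode1, status);

    /* B-tree DBFIND on the master, then walk its entries with mode-5 gets. */
    DBFIND(base, master, &mode1, status, item, key);
    DBGET(base, master, &mode5, status, list, buffer, key);

    /* The trap: an ordinary DBFIND on the detail with the same search item
       resets IMAGE's internal super-chain flags...                          */
    DBFIND(base, detail, &mode1, status, item, key);

    /* ...so this next mode-5 DBGET on the master comes back end-of-chain
       instead of returning the next qualifying master entry.               */
    DBGET(base, master, &mode5, status, list, buffer, key);

    /* Workaround: do the detail access through a second DBOPEN, leaving
       the master's chain position in the first open undisturbed.           */
    DBOPEN(base2, pass, &mode1, status);
    DBFIND(base2, detail, &mode1, status, item, key);
    DBGET(base2, detail, &mode5, status, list, buffer, key);

    DBCLOSE(base2, detail, &mode1, status);
    DBCLOSE(base, master, &mode1, status);
    return 0;
}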

Be honest now, did you know this?

An MPE guru (who shall remain nameless) posted the following several months ago: “I probably heard about this enhancement at some point but must never have internalized it sufficiently. Looking at :HELP SHOWVAR I see that :SHOWVAR now supports a ‘;JOB=’ parameter, meaning you can (with SM capability) view the user variables of another job or session.

“I can think of lots of interesting uses for this in monitoring the progress of jobs and so forth. Many environments already have a JOBSTEP variable of some sort that could be interrogated to determine what step the job is on, and it would be easy to stick a few ‘progress’ variable :SETVARs into long-running programs that would let you see how far along they are.

:showvar stepnum;job=#j2
STEPNUM = 10
:

“The current implementation only lets you see ‘user’ variables and not ‘HP’ variables (but a few things that you might think are ‘HP’ variables like ‘HPSTDIN_NETWORK_ADDR’ turn out to be user variables). I suspect we have Jeff Vance to thank for this nifty feature.”

As someone else commented, this is “way cool.” Indeed it is. But this same person went on to question why only “user” variables are accessible. It turns out, as explained by Gavin Scott and confirmed by Jeff Vance (the author of the “;job=” parameter and lots more cool stuff), that “the user variables are certainly literal data stored in a data structure which can be hunted down and read from another session. Most/all of the (real) ‘HP’ variables are ‘virtual’ variables (HPCPUSECS for example) where what’s probably stored in the ‘variable table’ is the address of the function to be called to calculate and return the current value.

“So, getting at virtual ‘HP’ variables from another session would require calling the appropriate function from the context of the remote session which is hard, if not impossible, to do.”
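
To make the guru’s suggestion concrete, here is a minimal sketch of a job stream that maintains a JOBSTEP variable between steps (the job, account and program names are made up; passwords and real error handling are omitted):

!JOB NIGHTLY,MGR.PROD
!SETVAR JOBSTEP 1
!RUN EXTRACT.PUB.PROD
!SETVAR JOBSTEP 2
!RUN UPDATE.PUB.PROD
!SETVAR JOBSTEP 3
!RUN REPORTS.PUB.PROD
!EOJ

From a session with SM capability you can then check on its progress, substituting the real job number:

:showvar jobstep;job=#j123
JOBSTEP = 2
: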

An addition to my iSeries report

It turns out IBM is doing something with the iSeries somewhat analogous to HP’s dumbing down of the A-Class systems for use by MPE. In case you are not aware, the MPE versions of the A-Class operate at only one-fourth the speed of their HP-UX counterparts (i.e., there is no 110-MHz processor, there is just a 440-MHz processor running at one-fourth of its full speed). This was done entirely for marketing purposes to keep the A-Class from cannibalizing sales of the N-Class. I happen to think it was one of the dumber things CSY did. Apparently HP does not hold a patent on such moves.

The iSeries (using the OS/400 operating system) shares the same processors as the pSeries (using AIX) – just as the MPE A-Class uses the exact same processors as the HP-UX A-Class. When you buy a pSeries, you buy a certain amount of processing power and can do anything you please with it. Not so the iSeries. IBM uses a software governor to limit the amount of interactive processing (they call it “5250 green screen” processing) you can do without paying (a lot) more. In theory, this was so they could introduce a “competitively priced” low-end server for things like web serving and as a Lotus Domino server.

When you are using an iSeries for these tasks, you have the full power of the processor available. In practice, though, the governor has the effect of obscuring the true price you are going to have to pay to do a certain amount of interactive work. This is particularly important for anyone contemplating moving VPlus/COBOL MPE applications to the iSeries, since those are exactly the kind of “5250 green screen” workloads the governor throttles. So while I was generally positive about the iSeries in my page one article last month, caveat emptor.

A test server on the cheap

Suppose you’ve decided to investigate HP-UX or Linux. Think used. Technology turns over so rapidly now you can get great deals on machines that were at the top of the heap just two years ago. Do you really need more for a proof of concept?

The dot bombs have created a glut of used servers, particularly in the Intel space. True, you can run Linux on almost any piece of crap PC. For example, I’ve got Red Hat 7.1 running quite nicely on an old P133 that could barely run Windows 95. But if you are serious about investigating Linux, you’ll want a server-class machine with multiple processors and SCSI RAID, something that will cost in excess of $2,500 new. Next month I’ll tell you about my experience creating a dual P500 Linux server with 9GB of SCSI RAID 1 for less than $600. I have a spare motherboard and power supply to boot.

John Burke is the editor of the NewsWire’s Hidden Value and net.digest columns and has more than 20 years’ experience managing HP 3000s.


Copyright The 3000 NewsWire. All rights reserved.