
Net.digest summarizes helpful technical discussions on the HP 3000 Internet newsgroup and mailing list. Advice here is offered on a best-effort, Good Samaritan basis. Test these concepts for yourself before applying them to your HP 3000s.

Edited by John Burke

February was quite a month for off-topic and barely-on-topic threads; the normally high signal-to-noise ratio took a definite hit. Perhaps it was because the Solution Symposium and SIG3000 were both held in February. Perhaps it was a certain giddiness setting in after we all survived Y2K and the January fallout. Whatever the cause, we were treated to lengthy, animated threads about how copy/cut and paste, and similarly Control-C/Control-V, should behave in a terminal emulator. Another lengthy thread dealt with the renaming of the HP 3000 to the HP e3000; the general sense was that if it means more recognition for our platform of choice, then great.

A normally slow Monday was enlivened with a discussion of what an operator does and what he should be paid. Then there was the thread that delved into the history of the “Mystery Mansion” game on MPE. My personal favorite off-topic thread started out with an on-topic posting from Lars Appel about a Java GUI using Java Telnet to launch and control a Transact/iX program for accessing a TurboIMAGE database. It then morphed (don’t ask me how) into a long account from Wirt Atmar about Roswell, New Mexico in the 1970s, the urban legend that has grown out of it, and Wirt’s role.

I would like to hear from readers of this column and Hidden Value, which I also edit. Even negative comments are welcome. If you think I’m full of it or goofed, or a horse’s behind, let me know. If something from these columns helped you, let me know. If you’ve got an idea for something you think I missed, let me know. If you spot something on 3000-L and would like someone to elaborate on what was discussed, let me know. Are you seeing a pattern here? You can reach me at john.burke@paccoast.com or john_burke@pacbell.net.

At SIG3000 in February, someone did approach me and mention that he finds Hidden Value and net.digest useful. Frankly, it made my day. More importantly, I wanted to take this opportunity to tell you, as I told my “fan,” that we are compiling an indexed “Best of net.digest and Hidden Value” for publication later this year, covering all the columns since the very beginning. The 3000-L newsgroup/mailing list is an incredible resource for the undocumented or hard-to-find tips and tricks that make our daily lives as system managers/administrators easier. Unfortunately, even if you follow the list regularly, finding and remembering the various nuggets six months later can be quite a challenge. Hopefully, this “Best Of” will get dog-eared with use.

But I have 20 percent free space!

The question went something like this: We are trying to RESTORE a database on top of an existing version of the database, but the RESTORE fails with the error message

RESTORE ENCOUNTERED ERROR TRYING TO ALLOCATE DISC SPACE: SSM ERROR: -1 (S/R 1945)

COMDB04 .OBJECT .CAD NOT RESTORED: DISC ALLOCATION FAILURE

We have gotten around this problem by purging the entire COMDB database with DBUTIL and then restoring the entire database from tape. For what it is worth, DISCFREE C shows we have 20 percent free space before purging the database. What is happening here?

Ron Horner suggested what turned out to be the problem: RESTORE purges the old file only after completely restoring the new version.

Goetz Neumann and Bijo Kappen of HP elaborated:

RESTORE creates the file it is currently fetching from tape in the NEW domain. Thus it is invisible to LISTF, as it has not been inserted into the directory yet. RESTORE removes the original file from the permanent domain only after it successfully gets all the data from the tape; saving the NEW file into the directory is the last step. This is necessary to avoid possible data loss if something prevents the file from being restored completely.

The sectors for the NEW file will be allocated on the volume set where the file belongs, based on its volume set restrictions. Thus, on any RESTORE, you will need at a minimum as much free permanent disk space on each volume set as the largest file on your backup tape for that volume set. If you do parallel tape device RESTOREs or use the interleave option, you will need even more space, because RESTORE will be holding multiple NEW files open for a longer period of time.

[Editor’s note: In case you need one, this is a very good reason to check the results of every RESTORE very carefully. In a worst-case scenario, only some datasets will be restored, and you will be left with a database that is both logically and physically corrupt.]

Stick with MRS for backups

While we are on the subject of backups and tape, a user writes in: We have a DDS-3 drive that was installed last year. Prior to that, we had a DDS-1 drive that came stock with our 927LX. This morning I tried to :STORE to an old HP-cut SUBSYS tape, which is a 60-meter DDS tape. It was write-enabled via the little white slider, but when the :STORE got underway, the message 10:41/18/LDEV#7 WRITE NOT ENABLED appeared on the console. Does anyone know of any limitations in using a DDS-3 drive with 60-meter tapes, or is this just a bad tape?

The gist of the responses is that the Media Recognition System (MRS) gotcha probably bit the user. 90-meter DDS-1, DDS-2 and DDS-3 drives can read from and write to 60-meter tapes, provided the MRS requirements are met; i.e., either the tape uses MRS or the drive has been configured (via a DIP switch) to disregard MRS. Note that most drives come configured by default to require MRS, so if you have an internal 90-meter DDS-1, DDS-2 or DDS-3 drive, chances are very good it requires MRS tapes, at least for writing.

So, how do you tell whether a tape uses MRS? Some tapes will actually have “Media Recognition System” printed somewhere on the case, but you cannot count on that. The thing to look for is the 4-bar symbol (||||) on the case; it always means the tape uses MRS. My advice is to discard any tapes that do not use MRS, unless they are archive tapes you must keep for read purposes. Do you really want to play around with a potentially non-MRS tape when you are trying to take a memory dump or create a CSLT/SLT?

Okay, this probably appeared in some condensed form in a long ago Hidden Value column. What is different this time — and what we all need to be aware of — is the following information from Denys Beauchemin about the DDS-4 standard: “A DDS-4 drive will be able to read from and write to DDS-3 125-meter tapes and DDS-2 120-meter tapes. It will only be able to read from, not write to DDS-1 90-meter tapes. DDS-1 60-meter tapes cannot be read from or written to by a DDS-4 drive.”

This has nothing to do with MRS. But it has everything to do with the way HP currently distributes FOS, SUBSYS, POWERPATCH and PATCH tapes: on 60-meter DDS-1 tapes. I suspect that by the time the N-class machines come out, if not sooner, that internal tape drive will be DDS-4. Consider yourself warned.

How can I make CI variables behave more Unix-like?

CI variables — their capabilities and, more often, their deficiencies — are a frequent subject of discussion on 3000-L. For those of us who grew up on plain old MPE, CI variables were a godsend. We were so caught up in the excitement of what we could do with CI variables and command files that it took most of us a while to realize the inadequacy of the implementation. For those coming to MPE/iX from a Unix perspective, CI variables seem woefully inadequate. Two separate questions from people with such a Unix perspective highlighted different “problems” with the implementation of CI variables.

But first, how do CI variables work in MPE/iX? Tom Emerson gave a good, concise explanation.

“SETVAR is the MPE/iX command for setting a job/session (local) variable. I use ‘local’ somewhat loosely here because these variables are ‘global’ to your entire job or session and, by extension, are automatically available to any sub-processes within your process tree. There are some more-or-less ‘global’ variables, better known as SYSTEM variables, such as HPSUSAN, HPCPU, etc.”
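
To see that job/session scope in action, consider a tiny command file (the name SCOPEDEMO and the variable are invented for this sketch):

COMMENT SCOPEDEMO -- anything set here outlives the script
SETVAR GREETING "set inside SCOPEDEMO"

Run SCOPEDEMO and then type SHOWVAR GREETING at the CI prompt: the variable is still defined, and it stays defined until you DELETEVAR it or the job or session ends.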

The first questioner was looking for something like user-definable system variables that could be used to pass information among separate jobs and sessions. Unfortunately, no such animal exists. At least not yet, and probably not for some time, if ever.

There is, however, a workaround in the form of UDCs created by John Krussel that implement system-, account- and user-level variables. The UDCs make use of the hierarchical file system (HFS) to create and maintain “variables.” Tim Ericson, who maintains a fine Web site with a huge collection of useful command files at www.denkor.com/hp3000/command_files, pointed this out. For John Krussel’s UDCs and documentation, plus additional handy CI scripts, point your browser to jazz.external.hp.com/src/scripts.
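
I have not dissected Krussel’s UDCs here, but the general idea is easy to sketch: keep each “system variable” as a one-record file in an HFS directory that every job and session can reach, and wrap the bookkeeping in a pair of scripts. Everything below (the /SYSVARS directory and the script names) is invented for illustration; see the jazz site above for the real thing.

COMMENT SETSYSVAR -- write a value where any job or session can find it.
COMMENT Assumes a shared directory created once (e.g. with NEWDIR) that
COMMENT everyone can read and write.
PARM varname, value
ECHO !value > /SYSVARS/!varname

COMMENT GETSYSVAR -- pull the value back into an ordinary CI variable.
PARM varname
INPUT !varname < /SYSVARS/!varname

Account- and user-level flavors would simply use different directories; the real UDCs on jazz presumably handle quoting, security and cleanup far more carefully than this sketch does.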

The second questioner was looking for something comparable to shell variables, which are not automatically available at all levels: you have to export a shell variable for it to be visible at lower levels. Thus, there is a certain locality to shell variables.

It was at this point that Jeff Vance, HP’s principal CI Architect at CSY, noted that he had worked on a project to provide both true system-wide and local CI variables (in fact, the design was complete and some coding was done). He had to abandon the effort to work on higher-priority projects, and does not know when, or if, he will be able to return to it. In the meantime, Jeff offered this suggestion for achieving locality:

Variable names can be created using CI depth, PIN, etc. to try to create uniqueness. E.g.,

setvar foo!hppin value

setvar foo!hpcidepth value1
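
Because HPPIN stays the same for the life of the process, the same name can be rebuilt anywhere in the script to use or remove the variable. A minimal sketch (foo and the value are placeholders):

setvar foo!hppin "scratch value for this process"

comment ... do the work, then clean up before exiting ...

deletevar foo!hppin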

Mark Bixby noted that CI variables are always job/session in scope, while shell variables are local but are inherited by children if they have been exported. He suggested that, working in the CI, some level of locality could be achieved by “making your CI script use unique variable names. If I’m writing a CI script called FOO, all of my variable references will start with FOO also, i.e.

SETVAR FOO_XX "temp value"

SETVAR FOO_YY "another value"

...

DELETEVAR FOO_@

“That way FOO’s variables won’t conflict with any variables in any parent scripts.”
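
Put together, a command file written to that convention might look like this minimal sketch (the script name FOO, its parameter and its variables are all invented for illustration):

PARM dbname="MYDB"
COMMENT FOO -- every variable this script creates starts with FOO_
SETVAR FOO_DB "!dbname"
SETVAR FOO_COUNT 0
COMMENT ... the real work would go here, bumping the counter as it runs ...
SETVAR FOO_COUNT FOO_COUNT + 1
ECHO FOO finished with !FOO_DB after !FOO_COUNT pass(es)
COMMENT remove everything FOO created before returning to the caller
DELETEVAR FOO_@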

Apparently, HP has a formally documented recommendation for creating “local-ness,” shared by Erik Vistica and reprinted here:

From call-id A5052608:

MPE: How to create CI variables with local (command file) scope

Problem Description: I have separate command files that use the same variable names in them. If one of the command files calls the other, then they both affect the same global variable, with undesirable results. Is there a concept of a CI variable with its scope local to the command file?

Solution: No. All user-defined CI variables have global (JOB/SESSION) scope. Some HP-defined CI variables (HPFILE, HPPIN, HPUSERCMDEPTH) return a different value depending on the context within the JOB/SESSION when they are called.

HPFILE returns the fully qualified filename of the command file.

HPPIN returns the PIN of the calling process.

HPUSERCMDEPTH returns the command file call nesting level.

To get the effect of local scope using global variables, you need a naming convention to prevent name collisions. There are several cases to consider.

Case 1: Command file CMDA calls CMDB, both using varname VAR1.

• Use a hard-coded prefix in each command file.

In CMDA use: SETVAR CMDA_VAR1 1

In CMDB use: SETVAR CMDB_VAR1 2

• Use HPFILE.

SETVAR ![FINFO(HPFILE,"FNAME")]_VAR1 1

• Use HPUSERCMDEPTH.

SETVAR D!"HPUSERCMDEPTH"_VAR1 1 (Note: the variable name needs a leading non-digit)

Case 2: Command file CMDA calls itself, using varname VAR1.

• Same answer as case 1, the third solution: use HPUSERCMDEPTH.

Case 3: There are two son processes. Each one calls CMDA, which calls CMDB, at the same nesting level.

• Same answer as case 1, the third solution: use HPUSERCMDEPTH. It is not certain this will work, since it is not clear whether HPUSERCMDEPTH is reset at the JSMAIN, CI, or user-process level. [A sketch of the HPUSERCMDEPTH technique follows this case list.]

• Use HPPIN and HPUSERCMDEPTH.

SETVAR P!"HPPIN"_!"HPUSERCMDEPTH"_VAR1 1

• Use HPPIN, HPUSERCMDEPTH and HPFILE (guaranteed unique, hard to read).

SETVAR P!"HPPIN"_!"HPUSERCMDEPTH"_![FINFO(HPFILE,"FNAME")]_![FINFO(HPFILE,"GROUP")]_![FINFO(HPFILE,"ACCT")]_VAR1 1
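
To make the HPUSERCMDEPTH approach concrete, here is a minimal sketch: an invented command file called RECURSE that calls itself, with each nesting level writing to its own variable (D1_VAR1, D2_VAR1, and so on) instead of stepping on its caller's.

COMMENT RECURSE -- embed the call nesting depth in the variable name
SETVAR D!"HPUSERCMDEPTH"_VAR1 HPUSERCMDEPTH
IF HPUSERCMDEPTH < 3 THEN
   XEQ RECURSE
ELSE
   COMMENT at the deepest call there is one variable per level
   SHOWVAR D@_VAR1
ENDIF
DELETEVAR D!"HPUSERCMDEPTH"_VAR1

Where the third case leaves doubt, folding HPPIN into the name, as in the last two solutions, removes it.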

Again, there is no true local scope, only global scope for CI variables within any one session/job. The techniques presented above do provide at least a reasonable workaround for both system-wide and process-local variables. 


Copyright The 3000 NewsWire. All rights reserved.