Net.digest summarizes helpful technical discussions on the HP 3000 Internet newsgroup and mailing list. Advice here is offered on a best-effort, Good Samaritan basis. Test these concepts for yourself before applying them to your HP 3000s.
Edited by John Burke
One thing that always happens when 3000-L diverts to the wildly off-topic is that a handful of people post messages to 3000-L asking to be removed from the list. So it was again. Apparently they forget to keep the message they receive upon subscribing to 3000-L that tells how to leave. Or they just can't be bothered to look. After a certain amount of argument, it was decided to append a simple two-line how-to onto the end of each posting, in the hope this would eliminate such annoying postings. It took exactly seven days for the following warning to come true: the same morons who were somehow able to get onto the list but can't be bothered to figure out how to get off will still post their "please unsubscribe me" messages. It's just that they will be slightly more humorous for having the how-to instructions appended.
My favorite posting of the month, though, was this story from CMP's TechWeb: The University of North Carolina has finally found a network server that, although missing for four years, hasn't missed a packet in all that time. Try as they might, university administrators couldn't find the server. Working with Novell, IT workers tracked it down by meticulously following cable until they literally ran into a wall. Maintenance workers had mistakenly sealed the server behind drywall. My second favorite: New Mexico has become the first state to offer motorists an online vision test. Huh?
As always, I would like to hear from readers of net.digest and Hidden Value. Even negative comments are welcome. If you think I'm full of it, or goofed, or am a horse's behind, let me know. If something from these columns helped you, let me know. If you've got an idea for something you think I missed, let me know. If you spot something on 3000-L and would like someone to elaborate on what was discussed, let me know. Are you seeing a pattern here? You can reach me at firstname.lastname@example.org or email@example.com.
Let's consider some Posix-related issues first this month.
Since many of us are now using the shell extensively, perhaps for the first time, it seems appropriate to discuss what really happens in the shell. This question started the thread: Is an instance of the MPE command interpreter launched whenever a Posix shell CGI script does a callci?
Doug Werth used debug to show that callci from the shell prompt is just an interface to the HPCICOMMAND intrinsic, so no new instance of the CI is launched. But what about callci in a shell script? There was speculation that this would result in the creation of a new process for CALLCI.HPBIN.SYS. However, Gavin Scott provided the following:
The functionality of CALLCI.HPBIN.SYS, along with many other commonly executed shell commands (including ls), was merged into the SH executable quite a while ago for performance reasons (specifically, to eliminate the process-creation overhead). The result is that a callci results in a direct call to HPCICOMMAND without any extra process creation of CALLCI.HPBIN.SYS or CI.PUB.SYS or anything other than what might be required by the command you're executing.
If you do a :LISTF,3 on files like CALLCI and LS in HPBIN.SYS, you'll see that they haven't even been accessed recently, since a copy of their code exists directly in the SHell program executable. If you explicitly execute something like

$ /SYS/HPBIN/LS *

then you'll be running the code in the external program rather than the built-in version, and you'll probably notice the extra delay involved too.
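A quick way to see which commands your shell has absorbed is to ask the shell itself. A minimal sketch, using the portable `type` spelling (the MPE/iX shell's whence serves the same purpose, as the next posting shows):

```shell
# Ask the shell where a command comes from before worrying about its
# process-creation cost. `type` is the portable spelling; the MPE/iX
# shell's `whence -v` reports the same information.
type cd     # a built-in: runs inside the shell, no new process
type ls     # built into the MPE/iX shell; often external on Unix systems
```

On an MPE/iX shell both commands report built-ins; on other systems ls may resolve to an external binary such as /bin/ls, which is exactly the fork()/exec() case Gavin describes.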
Frank Gribben did some searching and came up with:
I did a little digging and came up with the following. Posix's whence disagrees with the MPE/iX Shell & Utilities Reference Manual Vol. 2 on a few commands. I went with whence.
The following commands are built into the shell. Building such commands into the shell increases the performance of shell scripts and allows access to the shell's internal data structures and variables. For details on a command, see its man page. These internal commands have semantics indistinguishable from external commands:
alias basename break callci cat cd chmod chown command continue cp echo eval exec exit export false fc frombyte getopts jobs kill let ln ls mv print printf pwd read readonly return rm set shift test time times tobyte trap true type typeset umask unalias unset wait whence.
POSIX.2 recognizes a subset of these commands as special built-ins. Syntax errors in special built-in commands cause a non-interactive shell to exit with the exit status set by the command. The special built-in utilities are:
break continue eval exec exit export readonly return set shift trap typeset unset
As well as built-in commands, the shell has a set of predefined aliases: functions hash history integer r suspend.
Gavin then added, One thing to note is that while the commands that are built in can be executed efficiently without a fork()/exec() most of the time, there are some common cases where at least a fork() is still required.
Any time a pipe (|) is used to connect the output of one command to the input of another, the shell will have to fork() at least once.

Any expression in backtick (`) characters will cause the shell to fork(). For example, the shell statement export now=`date +%H%M` has to fork() in order to capture the output of the date command, even though date is built into the shell. This is the sort of thing that takes virtually no time on Unix but is quite expensive on MPE.
So even a shell script that uses nothing but the built-in commands can still result in extreme slowdowns due to fork() overhead if it uses pipes and the `expression` syntax (maybe other things too, but those are the ones I know of).
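The cost Gavin describes is easiest to trip over in loops. A minimal sketch of the pattern in portable shell (the loop bounds are invented for illustration): hoist command substitutions out of the loop, and lean on shell arithmetic, which needs no fork().

```shell
# Costly pattern: a fork() on every iteration for each backtick substitution.
i=0
while [ $i -lt 3 ]; do
  stamp=`date +%H%M`    # fork()s every pass, even though date is built in
  i=`expr $i + 1`       # expr is an external command: another fork()
done

# Cheaper pattern: fork() once up front, then use built-in arithmetic.
stamp=`date +%H%M`      # a single fork() to capture the timestamp
i=0
while [ $i -lt 3 ]; do
  i=$((i + 1))          # $(( )) is evaluated by the shell itself, no fork()
done
```

On Unix the difference is barely measurable; on MPE, where process creation is expensive, the second form can matter a great deal.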
A tale of tail, or more Posix smoothing issues
It all started out with the following posting:
I remember that in Unix you can tail -f the stdout of a running program so as to monitor the execution. Is there a way to tail -f the stdlist of a job so that the results of the executing program get displayed on the screen?
Several people suggested xeq tail.hpbin.sys -f /HPSPOOL/OUT/Onnn, where Onnn is the $stdlist of the job.
Unfortunately, as I pointed out in a follow-up, it fails even when logged on as a user with SM capability. The documentation describes the -f flag this way:
-f monitors a file as it grows. At the end of the file, tail wakes up every two seconds and prints any new data at the end of the file. This flag is ignored if reading from the standard input and standard input is a pipe.
As I noted, it does not work at all, let alone correctly, from the shell. This is not a huge deal, since you can come close to simulating the behavior using CI commands (see below). The real problem, as Donna Garverick points out, is that I/O redirection and the Posix tools do not play well together. To see what she means, stream the following job:
Donna goes on to point out: For the most part, we really enjoy the flexibility of combining older MPE things with newer Posix things, but this (at least to me) is a glaring problem.
Now, as to how you can simulate something approaching the expected behavior of tail -f on a job's $STDLIST: Jeff Vance proposed a simple command file with a while statement that pauses for two seconds and, upon resumption, displays the last 10 lines of the $STDLIST.
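The polling idea behind both Jeff's command file and tail -f itself can be sketched in portable shell. This is an assumption-laden stand-in, not Jeff's actual CI command file: the demo file names and the iteration cap are invented so the sketch terminates, where a real follower would sleep two seconds and loop forever.

```shell
#!/bin/sh
# Sketch of the tail -f polling loop: wake up, check whether the file
# has grown, and print only the newly appended data.
FILE=/tmp/follow.$$        # stands in for the job's $STDLIST spool file
OUT=/tmp/follow.out.$$     # captures what a real tail -f would display
printf 'line one\n' > "$FILE"
offset=0
n=0
while [ $n -lt 3 ]; do
  size=`wc -c < "$FILE"`
  size=$((size))                               # strip wc's padding
  if [ "$size" -gt "$offset" ]; then
    tail -c +$((offset + 1)) "$FILE" >> "$OUT" # emit only the new bytes
    offset=$size
  fi
  printf 'more %d\n' $n >> "$FILE"             # the "job" keeps writing
  n=$((n + 1))
  # a real follower would: sleep 2
done
```

Each pass prints exactly the data appended since the last pass, which is the behavior the man-page excerpt above promises for -f.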
There are also several third-party solutions that mimic tail -f exactly.
Finally, the command file below uses MPE's PRINT to approximate the operation of tail -f. It appears to work fairly well in the limited testing I've tried. Note the two instances of:
These are necessary to post the EOF of the $STDLIST so that the finfo function returns the correct value.
From the MPE/iX Commands manual:
Okay, what does this mean, and should we care? I edited together the responses from Gavin Scott and HP's Jeff Vance to provide an explanation:
The NM parser, which was written when MPE/XL was developed, is used for most new commands and is the thing that supports constructs like ;SELEQ=[ACCESS=INUSE], etc. In addition, many subsystem commands (VOLUTIL and SYSGEN, for example) are parsed by the NM parser.
Unfortunately, the old CM parsers in the CM command executors allowed a number of goofy things in syntax, and there are cases where the NM parser will just refuse to deal with this nonsense. So not all of the parsing code uses the new parser, and there are still a number of commands that are parsed by the same CM code that was used on MPE/V. For example,
1. :file a=b
The NM parser is more or less a single entity, but many of the CM command processing routines each parsed their own parameters rather than using a separate shared parser. So a lot of the problem is that different commands do things differently, and it's not practical to invent a single parsing module that can be all things to all programs. Most of the CI commands that are still in CM call the MYCOMMAND intrinsic and basically parse the command line as they see fit. Note: the MYCOMMAND restriction of no more than 255 characters in a token forces CM commands that accept filenames to not support long Posix filenames.
As for whether we should care, at least for performance reasons? According to Jeff, "I don't know if the NM parser is faster or not. It might not be, since it does a lot more work than the MYCOMMAND intrinsic." So we'll give the last word to Gavin: It's not going to make a measurable difference in the load on your machine.
John Burke is the editor of the NewsWire's Hidden Value and net.digest columns and has more than 20 years' experience managing HP 3000s.
Copyright The 3000 NewsWire. All rights reserved.