Net.digest summarizes helpful technical discussions on the HP 3000 Internet newsgroup and mailing list. Advice here is offered on a best-effort, Good Samaritan basis. Test these concepts for yourself before applying them to your HP 3000s.
Edited by John Burke
Of course, it would not be 3000-L without the many off-topic (and now wildly off-topic) postings to delight and amaze. Anyone who thinks people who work with computers are dull has never read 3000-L for even a week. Topics ranged from giving humorous examples of why the English language is so hard to learn, to a song for Nasdaq (sung to the tune of American Pie). From the real meaning behind the Wizard of Oz (including the admonition that sometimes a flying monkey is just a flying monkey) to Microsoft advertising. From the monthly hand-wringing over the future of the HP e3000 to some outright hilarious (and supposedly true) entries from a Dilbert Quotes contest. (The winner, attributed to an executive at Microsoft: "As of tomorrow, employees will only be able to access the building using individual security cards. Pictures will be taken next Wednesday and employees will receive their cards in two weeks.")
As always, I would like to hear from readers of net.digest and Hidden Value. Even negative comments are welcome. If you think I'm full of it, or goofed, or am a horse's behind, let me know. If something from these columns helped you, let me know. If you've got an idea for something you think I missed, let me know. If you spot something on 3000-L and would like someone to elaborate on what was discussed, let me know. Are you seeing a pattern here? You can reach me at firstname.lastname@example.org.
Jumbos, chunky style
Jumbo datasets have been around for a while now, but it is hard to gauge how far they have penetrated the user community beyond sites running the Amisys application. We can assume, however, that more and more sites are gradually going jumbo, since the topic comes up periodically on 3000-L. For example:
Q: We have several datasets that will have to go jumbo soon. Is there anything special to look out for?
Q: What determines the number of chunks in a jumbo dataset? We have one dataset with 32 chunks. All of our other jumbo datasets have a seemingly more reasonable two to six chunks. Is this documented anywhere? Does this unusually large quantity of chunks indicate some problem?
Joseph Rosenblatt replied to the first question with:
There are a couple of things to watch out for. The biggest issue is that the extended jumbo part of the file resides in HFS space. This means that in order to store, or restore, the whole dataset you must use HFS conventions, e.g., :STORE /DATAACCT/JUMBOSET/
It also means that it won't show up in a regular MPE LISTF listing; you must use LISTFILE with HFS conventions. If the dataset's MPE name is XXXDB22, then the jumbo file names will be XXXDB22.001, XXXDB22.002, etc.
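As a concrete illustration (the account, group, and root file names here follow the examples above but are hypothetical, and the :FILE equation for the tape device is an assumption; verify the exact syntax on your release), listing and storing the chunk files might look like this:

```
:FILE T;DEV=TAPE
:LISTFILE /DATAACCT/JUMBOSET/XXXDB22@,2
:STORE /DATAACCT/JUMBOSET/ ;*T ;SHOW
```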
[Editor's note: For many sites this may be their only contact with the HFS, thus increasing the danger that part of the database will go missing.]
Mike Hornsby added: Going jumbo also means that FSCHECK can't be used to purge any corrupted jumbos, or temporary versions of the jumbos left over from an aborted restore. FSCHECK can't purge HFS files.
In response to the second question, Stan Sieler replied:
32 chunks is not a problem. The jumbo code allows up to 99 chunks (easily expanded to 999 by a recompilation of IMAGE source code, if ever needed). There's a chance that the 32 chunks were caused by some early version of IMAGE and/or a tool, perhaps trying to allocate space on a checkerboarded system.
Jerry Fochtman added: The minimum number of chunks for a dataset is essentially determined by dividing the total space needed to house the data volume by 4 Gb (the maximum size of each chunk file) and rounding up. Using individual chunk files smaller than 4 Gb will then require more chunk files to contain the same data volume. The current maximum number of chunks that can exist for a single dataset is 99. However, it is more likely that one would exhaust the ability of IMAGE's current record pointer format to address all the entries in a set before reaching 99 files at 4 Gb each.
There are scenarios in which having multiple chunk files smaller than 4 Gb has benefited performance.
Chances are that if a third-party tool was used to convert the dataset from a standard set to a jumbo set, the tool produced the multiple chunk files. I would suspect that the individual chunk files are not maxed out at 4 Gb in size (16,000,000-plus sectors). It would be possible to maximize the size of each chunk file, and thus reduce the number of chunks, by performing a reorganization. You might consider contacting your third-party tool provider for guidance.
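To make the arithmetic concrete, here is a hypothetical ceiling calculation in the CI for a 14 Gb dataset (the 14 Gb figure is invented for illustration; since CALC does integer division, adding the divisor minus one before dividing rounds the result up):

```
:CALC (14 + 4 - 1) / 4
4
```

So a 14 Gb dataset needs a minimum of four 4 Gb chunk files; a tool that built chunks smaller than 4 Gb would produce more.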
Finally, Ken Sletten, SIGIMAGE Chairman, added:
As in many cases, there is a slight divergence between theory and reality.
Internal TurboIMAGE limits currently restrict the maximum size of one DETAIL dataset to 80 Gb or less (depending on Block Factor), using 4 Gb JUMBO HFS chunk file extensions. With the pending IMAGE enhancement to move from EntryByName to 32-bit EntryByNumber (IMAGE will continue to support both formats), HP could also choose to increase the number of allowable JUMBO chunk files. But HP has indicated that a future release will support using MPE Large Files as IMAGE DETAIL datasets. Since 128 Gb Large Files are in MPE 6.5, my unconfirmed guess is that HP will likely try to go directly to MPE Large Files in IMAGE instead of first increasing the number of JUMBO chunk files from the current 99 maximum (the format easily accommodates a 999 maximum).
Are you ready for MPE/iX 6.5?
As you fill out those requests for MPE/iX 6.5, consider this from Jeff Vance of the HP 3000 division (CSY):
Some recent feedback CSY has received is that there are still many customers who are not aware that in MPE/iX release 6.5 HP-IB and FiberLink (FL) devices will not be supported. CSY is trying to reach as many customers as possible to minimize surprises when 6.5 is installed.
Also, I have a script on Jazz that reads your IO configuration file and reports all HP-IB and FL devices. CSY recommends that you run this script to ensure you are HP-IB safe before you update to 6.5. Please see jazz.external.hp.com/src/scripts/hpib.txt
The last time ever I saw your face...
The originator of this question was probably looking for something like the CSL program BOUNCER as a solution to his problem. However, the way the question was worded led many people to go off in several interesting directions and proved once again that the contributors to 3000-L are very creative people.
From Jeff Vance: BOUNCER was already mentioned as a way to log off inactive users, and there may be other tools that do this. Plain MPE does not support this capability, but you can write a fairly simple script that sorts the SHOWJOB output by INTRODUCED date. Or, you can combine the new JOBCNT and JINFO functions (if you are on MPE/iX 6.0 PP1) to output just the jobs that are older than X days. There is a version of such a script on our Jazz Web server at: http://jazz.external.hp.com/src/scripts/jcnt.txt
[Editor's note: This is pretty cool. It is one more example of how the new CI functions introduced with MPE/iX 6.0 PP1 (JINFO, JOBCNT and WORDCNT) may be sufficient reason to go from MPE/iX 5.5 to MPE/iX 6.0 rather than waiting for MPE/iX 6.5 to achieve stability.]
From Barry Lake: If you are looking to determine the last time a user logged on and are using one of several available third-party logon security packages for MPE, then this information is tracked in the package's database.
If you don't have a package but would like to start tracking this information in a way that you control, then add some code to your system-wide logon UDC that writes who, date, and time information to a file (or database).
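A minimal sketch of what such a logon UDC might look like (the UDC name LOGUDC and the log file LOGONLOG.PUB.SYS are invented; HPDATEF and HPTIMEF are the CI's preformatted date and time variables, and CI append redirection with >> is assumed to be available on your release):

```
LOGUDC
OPTION LOGON, NOBREAK
CONTINUE
ECHO !HPJOBNAME,!HPUSER.!HPACCOUNT  !HPDATEF  !HPTIMEF >> LOGONLOG.PUB.SYS
*****
```

Each logon appends one line to the file, so the last line for a given user is their most recent logon.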
Alternatively, you can use the system log files (with job initiation and job termination logging turned on) and either create your own reporting program or purchase one.
Mike Berkowitz suggested:
Assuming that either the job name or the user name is unique, a low-tech solution is possible.
In a logon UDC for each user you want to track (substitute !HPJOBNAME for !HPUSER if the job name is the unique key):
:CONTINUE
:PURGE !HPUSER
:BUILD !HPUSER
When anyone wants to see the last time a user worked, just do this:
:SETVAR LASTWORK FINFO("username","CREATED")+FINFO("username","CREATETIME")
Lars Appel then offered a POSIX variant that keeps all the files in one place:
:xeq /bin/touch /somedir/!hpjobname_!hpuser_!hpaccount
With this approach you would check the last-modified date of a specific entry in the /somedir "database" that builds over time. Or you could use /bin/ls -lt to view it sorted by time.
[Editor's note: one of the beauties of this last approach is that it does not require you to first purge the file. touch will update the last-modified and last-accessed entries, leaving the creation date alone.]
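The idea can be tried out in any POSIX shell (including MPE/iX's); the directory name and the entry names below are placeholders standing in for the !hpjobname_!hpuser_!hpaccount pattern:

```shell
mkdir -p /tmp/lastlogon
# Simulate two logons being recorded by the logon UDC.
# A repeat "touch" on an existing name simply updates its timestamp,
# with no purge step needed.
touch /tmp/lastlogon/JOB1_MGR_PROD
sleep 1
touch /tmp/lastlogon/JOB2_OPERATOR_SYS
# Sorted by modification time, the most recent logon tops the list.
ls -t /tmp/lastlogon | head -1
```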
Yes, Virginia, many people still use DTCs
A lot. And they still have questions about how to use them. Every month or so, someone asks a detailed question about DTC configuration. For example:
I have two HP e3000s with the same DTCs configured on both. I noticed when a DTC was re-booted, it downloaded its configuration sometimes from one HP e3000 and sometimes from the other. What mechanism is it that tells the DTC to download its configuration from one HPe3000 instead of the other? Can I configure the DTCs to always download from one HP e3000?
Prior to MPE/iX 5.5, you had to use HP OpenView DTCMGR to do any of the really fancy stuff like switched ports. You also had to reboot to change the configuration of even one port on one DTC. Imagine having to manage 50-plus DTCs with several hundred printers connected. It seemed like we were rebooting several times a week just to add or change a printer. Ugly. But MPE/iX 5.5 changed all that. Managing DTCs can still be a pain, but at least now it need not consume your life.
Doug Werth of Beechglen Development provided a comprehensive reply (slightly edited for space) to the above question about switched ports, complete with reference to the appropriate documentation:
The DTC does not request a download from any particular system. It simply sends out a request for a download, and the first system to answer is the one it listens to, whether that is an HP e3000, an HP 9000, or an OpenView DTCMGR workstation.
You can set up SWITCHED ports that allow the DTC to speak to both systems, but only one system will be the controller. As of MPE/iX 5.5 you no longer need OpenView to set up switched DTCs. The manual is Configuring Systems for Terminals, Printers, and Other Serial Devices, which can be found online at docs.hp.com/cgi-bin/doc3k/B3202290034.15956/1
In a nutshell, on the first screen in NMMGR (after opening the configuration file), set up the slave system as a system that is running HP OpenView DTC Manager (I know, I know, you really aren't running OVDTCMGR, but the system thinks you are). Next, go to the DTS configuration and tell it how many ports you want to allow to come in via the switched method (non-nailed terminal ports). Then, configure each DTC with its NAME.DOMAIN.ORGANIZATION (NDO). You don't need to give it the MAC address; the DTC uses the NDO for communication.
On the master system, edit the terminal profiles. Look for the Go to Switch function key. Set Enable Switching to Y and Automatic Connection to N.
The key to making this work is that the NDO must exactly match what you have configured on the master system (DTC01.DOMAIN.ORG). It is best to use the same DOMAIN.ORG for the DTCs that you have for your system: PROD3000.DOMAIN.ORG, TEST3000.DOMAIN.ORG, DTC01.DOMAIN.ORG, etc.
If all goes well, when you press Return on a terminal you will get a DTC> prompt instead of a colon prompt. The user can then enter C PROD3000<return> or C TEST3000<return>, depending upon which machine they want.
There are a lot of options and subtle changes that can be made, and I know I haven't covered half of it here (nailed devices, printers, automatic connections), but this should get you started.
Copyright The 3000 NewsWire. All rights reserved.