Net.digest summarizes helpful technical discussions on the HP 3000 Internet newsgroup. Advice here is offered on a best-effort, Good Samaritan basis. Test these concepts for yourself before applying them to your production or development HP 3000s.
Making DNS work for FTP
The HP 3000 has domain name services (DNS) that can make tracking down systems in a network much simpler. That's why Donna Garverick asked how she might use DNS to make things easier to find in FTP searches: I want folks here to be able to ftp/telnet/blah from our MPE/iX 5.5 PowerPatch 4 boxes to any machine on our WAN that can make such a connection. Duane Percox refined the question and provided an answer that illustrated how MPE/iX uses DNS:
Not quite the correct question. The question is: if I attempt to open a socket connection to another host by host name (not IP address), will the 3000 use DNS to determine the IP address of the other host? The answer, I believe, is yes if: 1. You use MPE/iX 4.0 or later (for NETIPC applications); 2. RESLVCNF.NET.SYS is set up (for NETIPC applications); 3. You use MPE/iX 5.0 or later (for Posix... duh); 4. resolv.conf is set up correctly (as Donna has it, for Posix).
I believe your solution is to set up the RESLVCNF.NET.SYS file following the instructions inside that file, and then voila! You are golden, cooking with gas, good to go, in like flint, or whatever phrase you use to indicate a working situation.
By the way, the file does NOT exist on your 3000 system as RESLVCNF.NET.SYS. It comes as RSLVSAMP.NET.SYS. This also can cause some confusion.
Lee Gunter added some field experience and the always-helpful sample configuration file:
We have used DNS for name resolution for all our TCP/IP services, including FTP and Reflection VT-MGR, for some time, and it does work splendidly. All that was required on our (HP 3000) side was to correctly set up RESLVCNF.NET.SYS. Now we just need to depend upon the DNS name servers somewhere out in the ether cloud. Here is a sample of our setup (with addresses changed to protect my job):
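RESLVCNF.NET.SYS follows the standard BIND resolver (resolv.conf) syntax; a minimal sketch, with a placeholder domain and addresses standing in for real, site-specific values, looks like this:

```
# Hypothetical RESLVCNF.NET.SYS contents -- domain and addresses are placeholders
domain example.com
nameserver 192.0.2.10
nameserver 192.0.2.11
```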
Serving 3000 reports to an intranet
With more HP 3000s coming onto intranets every day, it's natural to want to pass the 3000's information everywhere, especially to Web servers throughout the enterprise. Enterprising manager Steve Murphy asked how to make this happen:
I enabled the Web server on my NT box and I'm planning on using it for an intranet. For the heck of it, I copied an HP 3000 spoolfile that had been generated from Quiz to the server and added a TXT extension. To my surprise, I could actually read the file from my browser. Then I set up a continuously running job to FTP any spoolfile waiting in the device class INTRANET to the server. It appears that I have an easy way to let many users read some standard reports. So far I like this, because we don't have to modify any of our code; we just have the user send the report to a different printer.
So what other ways are there to print from the 3000 to a Web server? What other ways are there to clean up or add formatting on the way to the server? What other tasks could I be doing with this combo?
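A bare-bones sketch of the transfer step in such a job stream might look like the following; the host, user, password, and file names are all placeholder assumptions, and a production job would also need logic to find and deselect the waiting spoolfiles:

```
!JOB WEBFTP,MGR.PROD
!COMMENT Hypothetical job stream -- all names and passwords are placeholders
!RUN FTP.ARPA.SYS
open ntserver
user webuser webpass
ascii
put REPORT1 report1.txt
quit
!EOJ
```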
John Korb, a consultant whose knowledge is so deep he can teach SPL courses, gave a mini treatise on the topic, one that delivers a great reason to set up that Web server on the 3000:
Well, you could possibly simplify things by running Apache or NCSA Web server on your HP 3000 (assuming you are running MPE/iX 5.0, or even better, 5.5).
With the original files on the 3000 and the Web server running on the HP 3000, the FTP step is unnecessary. Here's why. Set up a new group in the production account, something like WEBOUT, that can be read from anywhere on the system (this may mean changing the access restrictions at the account level). Then, set up an alias in the Web server's configuration file (typically srm.conf) that points to that new WEBOUT group. From then on, any file you place in that group will be visible in the Web server's space.
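As a sketch, such an alias might read as follows in srm.conf; the account name PROD and the /reports URL prefix are assumptions, and on MPE/iX the WEBOUT group of account PROD shows up in the Posix namespace as /PROD/WEBOUT:

```
# Hypothetical srm.conf fragment -- PROD and /reports are placeholders
Alias /reports/ /PROD/WEBOUT/
```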
Avoid naming any of the output files index or readme or header, as these are reserved names as far as the Web server is concerned. Now, instead of having your production programs write to the print spooler, have them create permanent files in the WEBOUT group.
If you want to make the reports look pretty and don't want to change the production programs, try this. Create a second new group, TOWEB, and have your production programs send their output there. Then, using Posix shell scripting, MPE/iX CI scripting, or Perl (which is available for MPE/iX and very handy), copy the data from the TOWEB group files to the WEBOUT group, adding any formatting you want in the process. We have some exception reports that the Web server generates on the fly that use color to show the severity of the problem (red means panic!, orange means better attend to this pronto, and brown means better watch this).
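A Posix shell version of that copy-and-format pass might be sketched like this; the directory names, severity keywords, and colors are illustrative assumptions, not Korb's actual script:

```shell
# Hypothetical sketch: copy reports from TOWEB to WEBOUT, wrapping them in
# HTML and color-coding lines by severity keyword. All names are assumptions.
mkdir -p TOWEB WEBOUT
printf 'PANIC: site 12 down\nINFO: nightly run ok\n' > TOWEB/except1

for rpt in TOWEB/*; do
  out="WEBOUT/$(basename "$rpt").html"
  {
    echo '<html><body><pre>'
    # Wrap matching lines in a color tag on the way to the Web group
    sed -e 's|.*PANIC.*|<font color="red">&</font>|' \
        -e 's|.*WARN.*|<font color="orange">&</font>|' "$rpt"
    echo '</pre></body></html>'
  } > "$out"
done
```

Pointing the Web server's alias at WEBOUT then publishes the formatted copies without touching the production programs.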
The script (which uses both the MPE/iX CI and Posix commands) culls messages from about 50 files to generate the single-page Web report. The 50 files are scanned for keywords with the Posix grep command; they are reports from 50 different field sites. At each field site a report is run with the output sent to a disk file, which is then DSCOPYed back to the system running the Web server software. The users can view any of a number of exception reports, or can look at any of the 50 individual field reports, all from the HP 3000-based Web server.
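The culling step can be sketched the same way; the sites directory, the keyword list, and the report contents below are assumptions standing in for the 50 field-site files:

```shell
# Hypothetical sketch: grep a set of field-site reports for exception
# keywords and merge the hits into one single-page summary.
mkdir -p sites
printf 'OK nightly\nERROR disk full\n' > sites/site01
printf 'OK nightly\n'                  > sites/site02
printf 'WARNING tape late\n'           > sites/site03

{
  echo '<html><body><h1>Exception summary</h1><pre>'
  # grep prefixes each hit with the originating report file's name
  grep -E 'ERROR|WARNING|PANIC' sites/*
  echo '</pre></body></html>'
} > summary.html
```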
Now there's a fellow who knows why keeping that Web server on the 3000 simplifies the work you can do with an intranet.
Making defragmentation last
A question from the MPE V side of the customer base brought up sound advice on how to make defragmentation last a lot longer, regardless of the operating system or tools you use on your 3000:
One of my clients has a major problem with their backup on an old MPE/V machine. The backup fails with OUT OF DISK SPACE. The system disk is horribly fragmented, with the largest chunk being around 1300 sectors. Since SL.PUB.SYS is a whole lot bigger than this, I'm presuming this is the problem. Running VINIT CONDENSE three times hasn't made a dent in the fragmentation. How can they clean up the disk and recover from this, apart from doing a store @.@.@ and a reload?
Bill Lancaster, whose consulting practice brings ample experience in defragmentation, pointed out that there's a step to take before trying to defragment:
Wirt Atmar, whose experience runs deep in the world of MPE V, pointed out a trick to balance the load while adding disks:
The real problem is the lack of total disk space, and the only long-term solution is to add another disk (of almost any size; equal or greater if possible). No matter how you condense your disks today, if you're this short on disk space, they'll refragment reasonably quickly and you'll be right back in the same boat almost immediately.
If you do add another disk and it's not the same size as your current (I'm presuming single) system disk, a very useful trick on MPE/V machines is that you can allocate file loadings proportional to the sizes of the disks. For example, if one disk was 100Mb and the other was 400Mb, when you assign your LDEVs to their CLASSES you can type in something like the following: DISC 1,2,2,2,2, so that for every file that gets RELOADed to LDEV 1, four files will be put on LDEV 2. Doing this will keep your fragmentation problems to a minimum.
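Assuming the RELOAD simply cycles through the ldevs listed in the class definition (an assumption for this sketch), the one-to-four split can be simulated with a few lines of shell:

```shell
# Hypothetical sketch: simulate round-robin file placement over the ldev
# list from a class defined as DISC 1,2,2,2,2
set -- 1 2 2 2 2              # ldev order from the class definition
count1=0; count2=0; i=0
for f in f01 f02 f03 f04 f05 f06 f07 f08 f09 f10; do
  eval "ldev=\$$(( i % 5 + 1 ))"   # pick the next ldev in the cycle
  if [ "$ldev" = 1 ]; then count1=$((count1 + 1)); else count2=$((count2 + 1)); fi
  i=$((i + 1))
done
echo "ldev 1 got $count1 files, ldev 2 got $count2 files"
```

Ten reloaded files land two on LDEV 1 and eight on LDEV 2, the one-to-four ratio Wirt describes.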
John Korb added some suggestions for what to purge, and how:
Just to jog your memory, what was that LISTF ,2 for? Well, the last time I tried this, the files happened to be restored in an order in which the largest files just so happened to be restored AFTER the smaller files, and the extent sizes of the larger files were larger than the largest contiguous area on any of the drives, so they couldn't be restored. Net result? I purged the account and then went through the LISTF listing to find the files with large extent sizes. I created an indirect file listing those files and specified the indirect file in the first restore. Bingo! The files restored. A second restore of @.@.account with the KEEP option restored the remainder of the account.
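Spelled out as MPE commands, the two-pass restore might look like this; ACCOUNT stands for the real account name, and BIGFILES is the hypothetical indirect file built from the LISTF ,2 listing of large-extent files:

```
:FILE T;DEV=TAPE
:RESTORE *T;^BIGFILES;SHOW
:RESTORE *T;@.@.ACCOUNT;KEEP;SHOW
```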
Copyright 1998, The 3000 NewsWire. All rights reserved.