
Net.digest summarizes helpful technical discussions on the comp.sys.hp.mpe Internet newsgroup and 3000-L mailing list. Advice here is offered on a best-effort, Good Samaritan basis. Test these concepts for yourself before applying them to your HP 3000s.

Edited by John Burke

I’ve got a full plate of topics this month, so I am going to forgo the usual snappy patter and get right to it. But not before repeating my plea to readers:

I would like to hear from readers of net.digest and Hidden Value. Even negative comments are welcome. If you think I’m full of it, that I goofed, or that I’m a horse’s behind, let me know. If something from these columns helped you, let me know. If you’ve got an idea for something you think I missed, let me know. If you spot something on 3000-L and would like someone to elaborate on what was discussed, let me know. Are you seeing a pattern here? You can reach me at john.burke@paccoast.com or john_burke@pacbell.net.

FTP = File Transfer Protocol: Well, sometimes anyway

FTP/iX is alternately the most reviled and the most praised software on the HP 3000. Actually, it is mostly reviled. Every month there are several threads on 3000-L devoted to some nuance or idiosyncrasy of FTP/iX. Of course you would not know that by reading Hidden Value or net.digest, because I rarely devote any space to FTP/iX; I’ve felt most of the discussions were too narrowly focused on aspects of FTP/iX to be of wide interest. That is starting to change now that more and more HP 3000s are connected to the Internet. Interest in FTPing patches (in this case a “fixed” version of Patch/iX) from the HP Response Center (RC), not to mention in using Patchman (discussed here several months ago) to determine the availability and suitability of patches, is clearly increasing.

Lee Gunter has been trying to use Mark Bixby’s Patchman script, but FTP/iX, essential for the process, has been misbehaving. Apparently, it does not play well with some proxy servers or firewalls. Hopefully someone will raise this and other FTP/iX issues with HP at this month’s SIG3000 meeting. Here is Lee’s story:

“At Mark Bixby’s suggestion, I’ve opened an enhancement request with HP to support proxy FTP from the HP 3000’s client, FTP.ARPA.SYS. This followed an extremely frustrating attempt to use Mark’s Patchman script and to access the patch database on HP’s Jazz Web site. We finally determined that MPE/iX’s FTP client doesn’t support firewall account prompts.

Here is the text of the request I sent to HP with their response, including the enhancement request number.

FTP.ARPA.SYS doesn’t appear to support proxy FTP.

We’ve been attempting to use FTP from our HP 3000 hosts to access patches and tools from HP, but this client apparently does not support access through proxy servers (firewalls) which require separate user and password info via an FTP “Account” command. Here’s a sample session:

File Transfer Protocol [A0008T34] (C) Hewlett-Packard Co. 1990
ftp> open jazz.external.hp.com
220 Secure Gateway FTP server ready.
Connected to jazz.external.hp.com (192.6.38.5). (FTPINFO 40)
Name(manager): anonymous myuser (‘myuser’ is the firewall account ID)
331 Password required for destination user ‘anonymous’.
332 Enter Gateway Password (use the ‘account’ command to respond)
Account: mypassword (firewall account password)
530 Access denied.
RECEIVED A GRACEFUL RELEASE OF THE CONNECTION. (SOCKERR 68)
ftp>

I believe that, normally, a “Password:” prompt should appear to accept the remote host login password but it does not; instead, the Gateway Password prompt appears, bypassing the remote host password prompt. Perhaps the proxy user ID entered in the “User:” prompt response is being used for the remote password?

If I enter only the “anonymous” user ID without the firewall account ID, the prompts display correctly, but I’m unable to determine how to respond to the “Account:” prompt. Our server folks say it must be an MPE/iX FTP client anomaly, because they always use the former syntax from any other client.

If this is, indeed, a lack of functionality in the MPE/iX FTP client, please enter an enhancement request to add this capability; otherwise, please advise me on the proper procedure for passing security on the proxy server.

HP’s reply:

HP Phoned Customer

Entered by HP Engineer: XXXXXXXXXX Date: 17 Jan 00 8:36 PST

Have seen this same issue at a few other sites.

Per customer request, submitted enhancement request JAGac56710.
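
One editorial footnote: when there is no proxy or firewall in the path, FTP/iX is quite capable of running unattended from a job stream, which is handy for pulling files off Jazz once Patchman has told you which patches you need. Treat the following as a sketch rather than a recipe: the directory and file names are placeholders, and the user subcommand with the password on the same line is assumed to work the way it does in most FTP clients; if your release balks, answer the Name and Password prompts on separate lines instead.

!JOB GETPATCH,MANAGER.SYS
!FTP.ARPA.SYS
open jazz.external.hp.com
user anonymous myname@mysite.com
cd /some/directory
get somefile.txt
quit
!EOJ

In a job, FTP.ARPA.SYS simply reads its commands from the lines that follow it in the stream, so the whole transfer can be scheduled like any other batch work.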

36 gigabytes? Why, I remember when 48 Mb was the largest drive…

The question was how to move a large system from one machine to a completely new system, disk drives included, as quickly as possible while minimizing downtime. In this particular case, it is a 7x24 shop and its online backup to a DLT4000 takes 16 hours!

Stan Sieler came up with an interesting approach to this particular problem, an approach that can be extended to solve a variety of problems in large 7x24 shops. (It requires being on MPE/iX 6.0 PowerPatch 1.)

• Buy a Seagate 36Gb disk drive (ST136403LW, about $1,100 in an external case).

• Configure the Seagate on both the old system and the new system.

• Connect the Seagate on the old system.

• Use volutil/newset to make the Seagate a new volume set, “XFER” (REMEMBER: Volume set names can and should be short names! A rough volutil sketch appears after the restore commands below.)

• Do one (or more) STORE-to-disks using compression with the target disk being the new Seagate drive.

(Why more than one? If you have more compressed data than will fit into 4Gb, I don’t know what STORE-to-disk will do.) For example:

:newgroup xfer.sys

:newgroup xfer.sys; onvs=XFER

:altgroup xfer.sys; homevs=XFER

:file xferA; dev=99 (where 99 is the XFER disk)

:store /A@ ; *xfera; compress

:file xferB; dev=99 (where 99 is the XFER disk)

:store /B@ ; *xferb; compress

• When the entire system is backed up onto the XFER disk, VSCLOSE it and unplug it (Caution: The safest approach is to power off your system first. See the sketch after the restore commands below.)

• Attach the new disk to the new system (see caution above) and reboot.

• Set up the XFER group on the new system.

:newgroup xfer.sys

:altgroup xfer.sys; homevs=XFER

• Restore the data.

:file xferA; dev=99 (or whatever ldev XFER is)

:restore *xferA; /; olddate; create (if necessary)
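
The volutil and VSCLOSE steps above are described only in prose, so here is roughly what they look like. This is a sketch, not a tested procedure: the master volume name MEMBER1 and ldev 99 are invented for illustration, and the NEWSET syntax should be verified against volutil’s help or the Volume Management manual before you rely on it.

On the old system, before the stores:

:volutil

volutil: newset XFER MEMBER1 99

volutil: exit

On the old system, after the stores complete:

:vsclose XFER

After the drive is attached to the new system and its ldev is configured, the XFER set should come online at reboot; if it does not, a :vsopen XFER should mount it.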

Obviously, this leaves out interesting things like setting up UDCs, the directory structure, etc. The point of this note is to introduce the concept of using a 36Gb disk drive as a transfer medium.

Bijo Kappen and Patrick Santucci both pointed out that TurboStore’s store-to-disk module is smart enough to create another “reel” when the 4Gb file limit is reached. From the TurboStore/iX documentation:

If STORE fills up the first disk file specified for the backup, it creates as many additional disk files as needed, or uses existing disk files. They will be built with the same default file characteristics as the first disk file. The naming convention used for additional files is to append the reel number to the end of the first disk filename. The resulting name will be an HFS-syntax name. For example, if STORE needed three disk files to store all files, they would be named:

/SYS/MYBACKUP/STORDISC

/SYS/MYBACKUP/STORDISC.2

/SYS/MYBACKUP/STORDISC.3
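
In command terms, a compressed store-to-disk that might overflow the 4Gb file limit looks roughly like the sketch below, which borrows the /SYS/MYBACKUP/STORDISC name from the documentation example; treat it as an outline rather than a tested recipe.

:file stordisc=/SYS/MYBACKUP/STORDISC; dev=disc

:store @.@.@; *stordisc; compress; show

If everything fits in one file, you get just /SYS/MYBACKUP/STORDISC; if not, STORE builds STORDISC.2, STORDISC.3 and so on, exactly as described above.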

John Lee reported doing the very thing Stan suggested:

“This does work. We do it all the time here when moving information between systems.

“Another variation we’ve found useful is using large, inexpensive, disks for archive purposes. Instead of purchasing often expensive archival devices such as CD or optical jukeboxes, just throw the information on some cheap hard disks inside a cheap enclosure and hang it off your system. Users then have access to all this information online. It might not be right for everybody, but in many cases it is.”

Let’s see now, it is a leap year, so consolidate is in and distribute is out

As the section title suggests, I have a somewhat cynical view of this argument since over the years HP has championed both consolidated and distributed systems — the choice depending upon the product line on the price list at the time. Anyway, this thread was started with the following question:

Would it be cost-effective to have one huge 3000 located at our MIS corporate headquarters supporting 6-8 plants remotely, or should we instead purchase 6-8 smaller boxes, one for each of our plants?

Wirt Atmar responded: “Given the quality of the Internet — and the quality that’s coming — I would think that all of the dominoes now fall in favor of having one large, centralized HP 3000, with remote users connected either directly through the Internet or through VPNs.

System management will be enormously easier when you have to manage only one site rather than a half dozen — and that very directly affects costs. Further, there is far less risk (and much less complexity) in a single-machine solution than there is in a multiple-machine architecture — and that too very directly affects costs. There are no worries about synchronization of backups or their reliable execution.

One large machine also allows you enormously greater flexibility to establish new locations and new manufacturing sites, even if only temporarily. Nowadays to establish a remote location all you need do is pull a trailer up to a construction site, find a phone line, and telnet back into the central office. In just a very few minutes, you can put a remote office anywhere on the planet — and that too very directly affects costs.

And one large machine very much simplifies user training and remote responsibilities — and that too very directly affects costs.

In essence, when you run one large machine, you become your own ASP (application service provider). It’s the way the world is going. I have for some time now believed that it is the way that we should be going too.”

John Lee chimed in with:

“The only drawback to this I can think of offhand is that you become dependent upon your communication link, another possible point of failure. If your remote sites cannot be down, then I think you have to lean toward having the system on site, which in turn allows different sites to back each other up (an advantage). Otherwise, I completely agree with everything Wirt says here.”

Jeff Kell added:

“From an IT perspective, one centralized server has the advantages of:

• single point of maintenance, updates, system administration, and operations

• no “field trips” troubleshooting remote problems (assuming you don’t staff each plant with IT people)

• no worries about synchronization of centralized data (remote systems don’t need to phone home periodically)

A few disadvantages:

• single point of failure should it go up in smoke (can be avoided by mirroring/arrays, etc., which is easier to cost-justify for one system than 6-8 separate sets of redundancy)

• network WAN infrastructure is [more] critical to the satellite plants — you would likely want a dedicated primary circuit with something like an ISDN dial-on-demand backup

• higher cost of software that may only be used by special groups, as it must be licensed for a higher tier.

To eliminate the latter case, you may want to consider buying a small machine (918 or similar) to use for developers and specialized apps. Development tools are expensive on big machines.”

Rich Gambrell also added:

“Depending on what kind of risk you want to run, you might consider a split approach to centralizing the system. Pick two sites to run ‘central’ servers and split the work between them. Add redundant communications lines (or use Internet access as a backup) between the other sites and both of these. Then keep the two servers in sync with a replication product, and you have a high-availability-type solution that may still offer significant cost savings over distributed systems (considering all costs, including the cost of downtime).”

And, finally, my two cents. I’ll take the simplicity of a single central server or server farm over the complexity of a distributed system any day. With the availability, reliability and cost of communications bandwidth constantly improving, the greatest barrier to the central-site model is being removed. Having said that, I do have to admit there are still situations where a distributed computing model might make sense for political or practical reasons. Unfortunately, it is not always easy to quantify ease of management and maintenance, so if I were doing an analysis for an organization, rather than starting off neutral I would recommend designing for the central-site model, and then consider moving to a distributed or hybrid model only if there were compelling non-economic reasons.


Copyright The 3000 NewsWire. All rights reserved.