Net.digest summarizes helpful technical discussions on the HP 3000 Internet newsgroup and mailing list. Advice here is offered on a best-effort, Good Samaritan basis. Test these concepts for yourself before applying them to your HP 3000s.
Edited by John Burke
On the lighter side, our British friends were enlightened about the US meaning of pasties (in the UK it means a pastry containing vegetables and meat). There was also a lengthy thread exploring the HP Old-timers League Qualification Test. Wirt Atmar treated us to reminiscences of his experiences with the space program on the thirtieth anniversary of the explosion aboard Apollo 13.
On a more somber note, it was pointed out that HP's heavily publicized HP Garage Program makes no mention of the HP e3000. There was the usual response from HP: "With respect to the Garage Program, it is currently focused on new Internet startups. For example, part of the program includes connecting startups to the right hosting provider and offering special financing options. This is primarily a UNIX and NT initiative, because the HP e3000 is targeting the Internet companies from a different angle. The HP e3000 is working directly with our ISVs to take their applications to new markets such as e-commerce." Does this mean HP does not expect any new applications to be developed on the HP e3000? Sigh.
Several months ago, I wrote about Mark Bixby's Patchman script and passed along a caution that on some systems (one out of three for me), the script would consistently hang the session, and only a reboot would kill it. While we still do not know what causes the problem, it appears to happen only if the script has been saved as an MPE fixed record-type file. Converting the script to a bytestream file eliminated the session hangs for everyone who had experienced them.
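If you need to make the conversion yourself, one approach (a sketch only, assuming the POSIX shell is installed; the file names here are placeholders, so substitute your own) is to copy the script through a shell redirection, since files created that way under the hierarchical file system are bytestream by default:

```
:XEQ SH.HPBIN.SYS -L
shell/iX> cat /SYS/PUB/PATCHMAN > /SYS/PUB/PATCHBS
shell/iX> exit
```

Check the result with LISTFILE ,3 to confirm the new copy really is a bytestream file before replacing the original.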
As always, I would like to hear from readers of net.digest and Hidden Value. Even negative comments are welcome. If you think I'm full of it, goofed, or am a horse's behind, let me know. If something from these columns helped you, let me know. If you've got an idea for something you think I missed, let me know. If you spot something on 3000-L and would like someone to elaborate on what was discussed, let me know. Are you seeing a pattern here? You can reach me at firstname.lastname@example.org.
The story on the Model 12H AutoRAID: YMMV.
The Model 12H AutoRAID is relatively new to MPE. And RAID, as a concept, is relatively new to most MPE system managers. Over the last few months, there have been many questions on 3000-L about RAID in general and the Model 12H in particular. Unfortunately, there is little performance data available. We've followed these questions both here and in Hidden Value and will continue to report on the state of the 12H and MPE. This month's thread was initiated by the following multipart question:
New budget money is a wonderful thing. I'm trying to research the 12H disc array and, of course, I've come up with more questions than answers.
Is it true that the 12H is supported as of MPE/iX 5.5 PP7 (plus supplemental patches)?
Can it be used for the system volume set (including ldev 1)?
Can LDEV 1 be on a 9Gb drive?
Are there any performance issues?
Bill Lancaster provided the following insights:
The 12H (AutoRAID) is supported as of MPE/iX 5.5 PP7. It can be used as the system volume set, and a 9Gb drive can be configured as LDEV 1, but you will only be able to address up to 4Gb of it, the same as with JBOD.
The performance implications of AutoRAID are another story. As with all things performance, "it depends" is again the right answer. If you're planning on using these in a near-line storage capacity (archiving, etc.), it should be fine. If you are using it in an OLTP environment with a mildly I/O-sensitive application, you'll probably still be fine. If you are planning on using it in an OLTP environment with heavy I/O sensitivity, you may have a problem.
There aren't a huge number of these [arrays] installed on MPE systems, but there are some. A few of my customers have them, and at least one company is forcing a switch from AutoRAID to the XP256 just for performance reasons.
Our (Lancaster Consulting's) official position is that:
for MPE always configure the 12H as RAID 1 (RAID 5 performance on MPE is awful)
thoroughly test the 12H in your environment before moving it into production, and
avoid significant reduction in spindle counts when moving to AutoRAID.
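The RAID 1 recommendation reflects the classic small-write penalty: a small random write costs two back-end disk I/Os under RAID 1 but four under RAID 5. The sketch below is a generic illustration of that arithmetic, not a model of the 12H itself (which shifts data between modes automatically):

```python
# Illustrative model of the RAID small-write penalty (generic, not 12H-specific).
# RAID 1: write data to both mirrors           -> 2 back-end I/Os per write.
# RAID 5: read old data, read old parity,
#         write new data, write new parity     -> 4 back-end I/Os per write.

def write_ios(writes: int, raid_level: int) -> int:
    """Back-end disk I/Os generated by `writes` small random writes."""
    per_write = {1: 2, 5: 4}[raid_level]
    return writes * per_write

oltp_writes = 10_000
print(write_ios(oltp_writes, 1))  # 20000 back-end I/Os under RAID 1
print(write_ios(oltp_writes, 5))  # 40000 back-end I/Os under RAID 5
```

For a write-heavy OLTP load, doubling the back-end I/O per write is exactly the kind of difference that shows up as "awful" RAID 5 performance.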
One user of the 12H contributed the configuration they are using: a system volume set consisting of two logical drives of 4Gb each, and a user volume set consisting of three logical drives of approximately 12Gb each.
Another user provided their experience: We have a system on 6.0 Express 1 plus some patches with ONLY a single AutoRAID configured very similarly. Performance is as good as the same system had on a collection of internal and Jamaica discs on three FW-SCSI channels.
We are waiting for a software patch and/or firmware fix that should improve performance by about 10 percent. The AutoRAID cache sometimes delivers very high I/O rates that can result in much improved performance.
The AutoRAID handles the RAID 1 and RAID 5 balance automagically, so there is no direct configuration of those attributes. About all one can do is ensure sufficient disk is available (outside of configured LUNs) so the AutoRAID can keep as much as needed in RAID 1. So far the balance on our system is very good, but it takes time for the data to migrate to the mode where it will eventually stay. Note we do not have large databases, but we do have 468,000 smallish files.
Where are we with the 12H? Some people are having good success, but at least one user is switching to the XP256 because the 12H does not provide sufficient performance. So I guess we are still left with "your mileage may vary."
How high will they go?
The primary reason to go to MPE/iX 6.5 now is large files. If you need them, you need MPE/iX 6.5. And, as was noted in Hidden Value this month, you can go directly from MPE/iX 5.5 to MPE/iX 6.5 without stopping at MPE/iX 6.0. However, you might ask, as the following poster did, which files count as large.
The HP documentation on MPE/iX 6.5 new features reads: "Support of fixed-length records and KSAM files of up to 128Gb." There is no mention of TurboIMAGE (plain vanilla, not jumbo datasets). Is TurboIMAGE included in the above statement?
Christian Lheureux responded first with:
What is included in the large file project at this time is flat files (fixed records only) and KSAM files. No TurboIMAGE, apart from the now well-known jumbo dataset feature. No Allbase.
Jerry Fochtman elaborated:
No, IMAGE does not yet support the new 128Gb file sizes. One still needs to use the jumbo dataset feature of IMAGE to take dataset capacities beyond a 4Gb file. However, the good news is that if you have a jumbo master set containing more than 3.75Gb of key data, you can now successfully attach a b-tree index to that master set. Furthermore, you can fully extract a jumbo set to a large file and sort it using either MPE SORT or utilities such as Suprtool from Robelle.
From Craig Fairchild, HP 3000 file system architect:
IMAGE has not yet adopted the large-file capability to extend the size of IMAGE databases. Through the use of jumbo datasets, IMAGE already has the capability to grow dataset sizes well beyond the 4Gb boundary, to 80Gb. At some point in the future the IMAGE team will make use of large files as the underlying technology behind large databases, but this is not the case in release 6.5 of MPE/iX.
One of the main objectives of the large-files capability introduced in release 6.5 was actually to complement jumbo datasets! The idea was to allow worry-free report generation (using tools like Suprtool) and easily sortable extracts of databases, even when these reports grow beyond 4Gb.
Ken Sletten, chairman of SIGIMAGE, brought some reality to the discussion:
One thing worth noting: since sort time grows faster than linearly with the number of records involved, if you try to sort 70Gb of data from an IMAGE detail held in a large MPE flat file, you might be able to go out and have dinner while you are waiting for it to finish. Several times.
Jerry Fochtman gave a real example of this:
As a point of reference, some time ago we needed some 80Gb datasets for product testing. On a 996 multiprocessor system it took just over seven days to load an 80Gb detail (406,000,000 entries) with two masters, running as the only process on the system. While HP SORT is very good about its use of resources and its sorting performance, you may want to plan on taking your vacation should you launch a process to extract, sort and report on this volume of data.
James Clarke then opined:
Adding more space to IMAGE datasets will only suffice in the short term. When I first changed from IMAGE to TurboIMAGE years ago, HP changed some of the pointers from 16-bit to 32-bit. I believe the saying then, before PA-RISC, was "What are you going to do with an IMAGE database with four billion records?" Well, jumbo sets give you more room to approach that four-billion-record limit. But until HP again changes the underlying structure, having more space will only make you cry when you hit the limit. So until users start hitting the limits, HP will probably tinker with structure changes within the labs until they come up with something that answers the need and allows forward conversion without major problems.
Finally Ken Sletten replied:
With the current EntryByName scheme that TurboIMAGE uses, you can put up to 80Gb of data in one dataset (it was only 40Gb until recently, when a relatively minor change to use a sign bit doubled the JUMBO scheme maximum to 80Gb).
But the maximum number of individual records that can currently be entered in one dataset depends on the Block Factor.
However, coming soon is a change from EntryByName to EntryByNumber (a migration utility will be provided). With respect to IMAGE internal limits, users will then be able to enter up to two billion records in an IMAGE dataset regardless of record size. Considering that this is a 250-fold increase over what people have been living with for datasets with Block Factor = 1, hopefully that will hold, at least for a little while, most users who have bumped up against that particular limit.
Eventually, with a combination of the EntryByNumber internal limit expansion and MPE Large Files, a single TurboIMAGE dataset will (at least theoretically) be able to hold 10 TERABytes!
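The quoted limits hang together arithmetically. The sketch below assumes "two billion" means a signed 32-bit entry number (2^31 - 1) and reads "10 terabytes" as 10 TiB; both interpretations are mine, chosen to make the figures check out:

```python
# Back-of-envelope check of the quoted TurboIMAGE limits.
# Assumption: "two billion records" = a signed 32-bit entry number.
max_entries = 2**31 - 1

# A 250-fold increase implies the old Block Factor = 1 ceiling was roughly:
old_limit = max_entries // 250
print(old_limit)                  # about 8.6 million entries

# Assumption: "10 terabytes" = 10 TiB. Spread over the maximum entry count,
# that implies an average media-record size of about 5Kb per entry:
bytes_per_entry = 10 * 2**40 / max_entries
print(round(bytes_per_entry))
```

In other words, the 10-terabyte ceiling is simply the new two-billion-entry limit multiplied by a roughly 5Kb record, so wide-record datasets reach it first.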
I wonder how soon it will be before that is not enough? Place your bets.
Copyright The 3000 NewsWire. All rights reserved.