Net.digest summarizes helpful technical discussions on the HP 3000 Internet newsgroup and mailing list. Advice here is offered on a best-effort, Good Samaritan basis. Test these concepts for yourself before applying them to your HP 3000s.
Edited by John Burke
On the lighter side, we were regaled with reminiscences of IBM's TSO, HP's Windows clients, and the language REX/3000. Then there were the derivations of the word "bursar" and the phrase "There's more than one way to skin a cat." And several lengthy threads on math and probability problems, as well as a horror story about a golf e-commerce site that sent an apology letter to all its customers because its project to convert to systems based on Oracle was a disaster. But my favorite posting during the month, as one who has struggled to even use vi, was the following, attributed to firstname.lastname@example.org:
To me vi is Zen. To use vi is to practice Zen. Every command is a koan. Profound to the user, unintelligible to the uninitiated. You discover truth every time you use it.
As always, I would like to hear from readers of net.digest and the other column I edit, Hidden Value. Even negative comments are welcome. If you think I'm full of it or goofed, or a horse's behind, let me know. If something from these columns helped you, let me know. If you've got an idea for something you think I missed, let me know. Are you seeing a pattern here? You can reach me at email@example.com.
Pssst. Wanna buy some memory? How much can you afford?
In March, many of us received a mailing from a memory vendor who shall remain nameless, a message that talked about greater memory requirements for MPE/iX 6.5 and made statements such as:
The most important of these features [of MPE/iX 6.5] is the abolishment of the 3.75Gb memory addressing performance limitation... Customers upgrading to 6.5 will experience enormous performance improvements as they configure memory up to the new physical maximums... Customers can expect to more than double overall speed and throughput under these new supportable memory configurations with no corresponding software upgrade charges.
And, finally, the kicker that got people talking on 3000-L:
Minimum memory requirement for MPE 6.5 is 4Gb.
This last was included on a page with a handy-dandy HP Memory Calculation Formula you could use to determine how much more memory you needed to purchase, presumably from this vendor. According to this formula, my 959/400 production box that happily supports 600 concurrent users on MPE/iX 5.5 PowerPatch 7 with 2Gb of memory should have 8.2Gb of memory for MPE/iX 6.5! I don't think so.
I note this is the same vendor that makes the slightly sleazy offer of "upgrade to one of our promotional memory configurations... and we'll send you to HP World, all expenses paid."
Therefore, a "how much memory will I really need?" thread was started by some poor system manager trying to respond to his manager's questions about the memory vendor's mailing:
My manager received one of those HP memory calculation formula sheets from .... He plugged in the numbers and it told him we needed to add at least 4 Gb of memory to our 939KS. They even have a line that says the minimum memory requirement for 6.5 is 4Gb. If anyone else has seen this mailing, how does it compare to your actual memory configuration? I would like some real world comparisons to use in my response.
Jerry Fochtman of Bradmark responded about the 4Gb minimum requirement: This is an incorrect statement, as we've loaded 6.5 on test systems with significantly less memory than 4Gb and they ran just fine.
Jeff Kell noted: Well, consider the source. You'd never ask a barber if he thinks you need a haircut, would you?
Steve Cole: I've never been one for using a formula for calculating memory. The amount of memory that is optimal for a given system is dependent on a lot of different things that affect the memory loading. On the other side, it's rare to find an HP 3000 that doesn't benefit when more memory is added.
Dennis Heidner suggested: There are all kinds of caveats with memory, including application mix, memory interleaving, mixing 512Mb modules, 256Mb modules, carriers, etc. I prefer to use the rule of thumb that new machines start with 1Gb if possible and, if the budget permits, max out the memory. Almost every version of the OS takes additional memory (but not as much as the suggested increment).
Mike Hornsby added: Any memory requirements formula would have to be based on: application type/mix, processor speeds, and number of active processes. In my opinion the formula from the memory vendor doesn't seem to take any of these into consideration... I have seen many cases where additional memory will speed up read-only batch processes to the detriment of interactive response times. So one has to question claims like "50 percent faster" and ask for more specifics based on the type of performance problem at hand... A primary concern in adding memory to a multiprocessor system is the memory interleave configuration. In my opinion, the amount of memory added is secondary to the number of interleaves created.
I'll give the final word to noted performance guru Bill Lancaster: The "1Gb per processor" rule is dead wrong. The right answer to "how much memory should you buy?" is "How much can you afford?" There is currently almost no point of diminishing returns on memory (except for some very unusual edge cases), although this may change with 6.5 as more memory becomes configurable (up to 16Gb).
Many times people have used convoluted rules-of-thumb to answer this question. Largely these are a waste of time, especially since memory is so cheap. Isn't competition a wonderful thing? Additional memory often helps online transaction performance, but very often dramatically improves read-oriented serial batch performance. Bottom line is, buy as much as you can afford and don't spend a lot of time trying to justify it, if you can help it.
Okay, I lied, since I am going to have the final word after all.
Unfortunately, no one from HP chimed in on this thread with an answer on 6.5 memory requirements. But I think there is a good reason for this: no one really knows yet. There is very little real-world experience with 6.5 on which to base a judgment about memory requirements. Because of all the variables involved, I doubt we will ever have a one-size-fits-all formula for optimum memory size. Bill's "buy as much as you can afford" guideline, to which I'll add "don't try to save money by skimping on memory," will likely remain the best advice.
Now that Apache/iX is supported by HP (at firstname.lastname@example.org for Apache/iX on MPE/iX 6.0, and on 6.5, by the Response Center), many people who held back initially are now making their first efforts at deploying Apache/iX in production. I hope to make Practical Apache an occasional feature of net.digest to share tricks of the trade.
When you are first trying to use Apache/iX in production, one of the initial issues you must deal with is how to bring other people into the publishing arena cleanly, without creating security issues. One questioner asked on 3000-L:
How are other sites allowing various application groups to tie into Apache? That is, on my crash-and-burn system, I've been modifying the .../htdocs/index.html file to hook in different things. That's fine for just me, but not in production. Are you doing something like http://your.system/~MGR.<account>/? That's quite doable but it seems a bit unsophisticated to me. Of course, I certainly don't want folks literally placing files in the Apache account, so what's the trick here? Symbolic links?
Several people answered similarly, but Mr. Apache, Mark Bixby, put it most succinctly:
The way I handled this when I was webmastering for www.cccd.edu was to have the users create their own content in their public_html/ subdirectories, and then test it via:

http://hostname/~username/content.html

When they were satisfied, and I was satisfied, I created a symlink in the DocumentRoot directory to point to the user's public_html/ subdirectory. The user could then publish their URL as:

http://hostname/symlinkname/content.html

The cumbersome ~username syntax would no longer be necessary.
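Bixby's technique can be sketched with a few shell commands. The paths and the user name here are hypothetical stand-ins; on a real Apache/iX system the DocumentRoot lives under the APACHE account, and httpd.conf must permit Options FollowSymLinks in the DocumentRoot directory for Apache to serve the linked content.

```shell
# Hypothetical layout: a user's staging area and the server's DocumentRoot.
mkdir -p home/alice/public_html htdocs

# The user develops and tests content under public_html/ first,
# reachable during testing via the ~username URL syntax.
echo '<html><body>Hello from alice</body></html>' > home/alice/public_html/index.html

# Once the user and the webmaster are both satisfied, the webmaster
# publishes the area with a symlink inside the DocumentRoot.
ln -s "$PWD/home/alice/public_html" htdocs/alice

# The same file is now reachable through the clean DocumentRoot path
# (http://hostname/alice/index.html), with no ~username needed.
cat htdocs/alice/index.html
```

Nothing is copied: the symlink means the user keeps editing in place while the published URL stays stable.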
Choosing a blocking factor: an exercise in futility?
Many of us spent considerable effort in our Classic MPE days figuring out optimal blocking factors for files. I even remember writing a program that took record size and number of records and determined the optimal blocking factor and total wasted space. I know, at least intellectually, that on MPE/iX the blocking factor does not matter. But what is the real story? A poster asks:
I've just been looking over the blocking-factor messages in the archives and I'm afraid I'm still somewhat confused. It has been claimed that the best blocking factor can be had by just not specifying one and letting MPE decide. So I did some testing. Looks to me like if you don't specify a blocking factor, MPE divides the sector size (256 bytes) by the record size (adjusted upward to be an even number if necessary). It drops any fractional portion to get the blocking factor and if the calculated number is less than one, it uses one.
But doesn't MPE deal with 4096-byte pages now? Does the blocking factor matter? How does changing it change the efficiency of the storage or performance of the retrieval of the data?
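The default the poster observed can be expressed as a short calculation. This is a sketch of the behavior as described in the post above, not HP's actual code:

```python
def default_blocking_factor(record_bytes: int, sector_bytes: int = 256) -> int:
    """Default blocking factor as the poster describes MPE choosing it."""
    if record_bytes % 2:                   # odd record sizes rounded up to even
        record_bytes += 1
    factor = sector_bytes // record_bytes  # drop any fractional portion
    return max(factor, 1)                  # never less than one

print(default_blocking_factor(80))    # 256 // 80 -> 3
print(default_blocking_factor(500))   # record larger than a sector -> 1
print(default_blocking_factor(127))   # rounded up to 128, 256 // 128 -> 2
```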
A number of people took a stab at it, but why settle for guesses when we can get an explanation from Craig Fairchild, file system architect for MPE/iX?
Several people have already answered, so I'll just try to summarize. The blocking factor on MPE/iX is primarily used as a means of compatibility with MPE/V. It comes into play in two situations:
1) When using the STORE ;TRANSPORT option (to create tapes in the MPE/V store tape format)
2) When reading a file that has been opened with either NOBUF or MRNOBUF
In case 2, above, the blocking factor is used to insert fill characters into the block of data that is returned to the caller, so that it looks exactly like it did on MPE/V, when data actually was stored on a block-by-block basis.
On MPE/XL and now MPE/iX, data is stored with each record laid out end-to-end. The blocking factor is not relevant.
Now to the performance question. The answer is (drum roll, please): It depends.
No, really, it does! For buffered and NOBUF access, it makes no real difference. For MR NOBUF, it can make a difference. The idea behind MR NOBUF is to retrieve multiple blocks of data in a single system call, thereby saving on the overhead associated with multiple calls to FREAD. This is offset by the fact that the file system is doing extra work to pad the data being returned, so that it looks as if the data had been stored in blocks. So the larger the number of blocks read (in your MR NOBUF read), and the less efficient the blocking factor, the more overhead you incur per byte of real data moved.
So in the one case of MR NOBUF access, it is best to specify a blocking factor that wastes the least amount of space per block.
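The "wasted space per block" Craig mentions is easy to estimate. The calculation below is illustrative, assuming the classic 256-byte sector and blocks rounded up to whole sectors; the waste is the padding the file system must generate on each MR NOBUF read:

```python
def wasted_bytes_per_block(record_bytes: int, blocking_factor: int,
                           sector_bytes: int = 256) -> int:
    """Padding bytes per block when each block is rounded up to whole sectors."""
    block_bytes = record_bytes * blocking_factor
    sectors = (block_bytes + sector_bytes - 1) // sector_bytes  # round up
    return sectors * sector_bytes - block_bytes

# 80-byte records, 3 per block: 240 data bytes padded out to one 256-byte sector.
print(wasted_bytes_per_block(80, 3))    # 16 bytes wasted per block
# 80-byte records, 16 per block: 1280 bytes is exactly 5 sectors.
print(wasted_bytes_per_block(80, 16))   # 0 bytes wasted per block
```

As the comparison shows, some blocking factors pack sectors exactly while others leave padding in every block, which is precisely the overhead that grows with large MR NOBUF reads.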
Copyright The 3000 NewsWire. All rights reserved.