Net.digest

Net.digest summarizes significant discussions on the HP 3000-L Internet mailing list. The column is edited by longtime HP 3000 columnist John Burke, who provides commentary on HP 3000 issues. Advice is provided on a best effort, Good Samaritan basis. Test these concepts for yourself before applying them on your systems.

Analysis by John Burke

Disk free space, how much is enough?
I suppose the correct answer is you can never have too much. But since we live in the real world of budgets and limited resources, we need to determine the minimum free space percentage that should be maintained on the system volume set in order to preserve system performance (disregarding other factors). Opinions range from 20 to 50 percent.

Well-known performance expert Bill Lancaster replied that

"There really isn't a pat answer. It depends on how dynamic your disk environment is. You should have enough free space so that:

1) you never run out;
2) you have plenty of space for transient (virtual) space;
3) you have enough space so that when your application(s) create temporary and work files, the system can do so with a minimum of undesirable effects; and,
4) when you expand files such as IMAGE/SQL datasets you will have enough room to ultimately place the extents where you want them.

Twenty percent is a pretty good starting point as a rule of thumb; but what you should do is watch your disk environment, get to know it, and tune accordingly."

List meister Jeff Kell added:
"'Free' space is somewhat intuitive. You need enough so that you don't run out, bearing in mind transient space, spool space, sort work areas and temporary files. If you have user volume sets, monitor the values on the individual sets, particularly mpexl_system_volume_set, which is where all of your transient and spool space will go.

"A less-intuitive factor is disk fragmentation. It is a big issue on MPE/iX now that there are no strict limits on the number of disc extents. It is a major issue on the system volume set on 5.0/5.5, especially if you are using any Posix applications (Samba, httpd, Apache, even 'official' products like Open Market and inetd). You can determine fragmentation from ':discfree a' by looking at the number of free space regions and the distribution by size, notably the max contiguous area. Anything less than 4K sectors on the system volume set can cause trouble (though there's a patch in the works).

"Also on MPE/iX 5.0 and MPE/iX 5.5, there are special considerations for LDEV 1 if you have only a few volumes in the set. As of 5.0, the system enforces an implicit limit of 50 percent capacity on LDEV 1 regardless of your percentage allocations in :volutil. It will not allocate space on LDEV 1 beyond 50 percent unless there is no other alternative. This aggravates fragmentation of the non-LDEV-1 system volumes and can drastically unbalance the allocation of transient space (paging areas) across the volume set. If you are memory constrained (who isn't, given the right application mix?) this can bottleneck paging I/O unless you have, say, four to five spindles in the system volume set. In rare cases, this can be a blessing, since system pages typically occur on LDEV 1 (allocated during boot-up, plus swapping from NL/XL/SL.PUB.SYS). If that drive is full, user stacks get allocated on other volumes. But on medium-to-large systems this is not the usual case.

"User volumes are relatively immune to 'percentage' of free space, since fragmentation stays low: only job/session temporary files and, of course, permanent files are allocated there. The system volume set is much more dynamic and sensitive."

Memory, how much is enough?
In regard to memory on MPE/iX -- probably even more so than with disk free space -- the answer is you can never have too much. The exact question:

"Upstairs says they have some money, I say I need memory. Advice please, as to how much memory we should have on a 947 which averages 40 users and peaks at around 70. Applications are a mixture of CM with VPlus, Native Mode C programs and PowerBuilder C/S, all of which accessing TurboIMAGE databases."

Jim Kilgo reported that attendees at the HP Performance Conference in Orlando in February were presented with a formula for estimating optimal main memory:

MPE/iX Operating System 37MB
Network Software 18MB
Number of concurrent users x 5MB
Job Limit x 8MB
Total Disk Space x 1%
Total Optimal Main Memory (sum of the above)

For example:
MPE/iX Operating System 37MB
Network Software 18MB
85 users X 5MB 425MB
7 X 8MB Job Limit 56MB
24 GB Disk Space X 1% 246MB
Total Optimal Main Memory 782MB
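The rule of thumb above can be sketched as a small function (the function and parameter names are mine, not HP's):

```python
# Sketch of the memory-sizing rule of thumb from the conference:
# OS + network + per-user + per-job + 1 percent of total disk space.
def optimal_memory_mb(users, job_limit, disk_gb):
    os_mb = 37                       # MPE/iX operating system
    network_mb = 18                  # network software
    user_mb = users * 5              # 5 MB per concurrent user
    job_mb = job_limit * 8           # 8 MB per job in the job limit
    disk_mb = disk_gb * 1024 * 0.01  # 1 percent of total disk space
    return os_mb + network_mb + user_mb + job_mb + disk_mb

# The worked example from the column: 85 users, job limit 7, 24 GB disk.
print(round(optimal_memory_mb(85, 7, 24)))  # 782
```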

An even easier method to estimate memory, courtesy of Stan Sieler:

Money_available / price_per_MB = Amount of MB to buy

Seriously though, Stan went on to discuss optimal memory sizing for MPE/iX:

"I'd ask two questions:

"1) How much spare CPU is available?

"If there's a lot, but you feel performance is slow, then you might be memory bound. If there's little CPU available, then adding more memory probably won't gain much performance (except for the slight CPU savings from reduced page-fault handling) because there isn't spare CPU horsepower available to take advantage of the extra memory.

"You can check CPU busy for free by going to the hardware console (LDEV 20) and pressing Control-B, so you can see the status line. Note that the left edge alternates between FFFF and FxFF. That 'x' times 10 is how busy your CPU was for the last second or so. Thus, F3FF means your CPU was 30 percent busy.
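Stan's status-line arithmetic is simple enough to sketch in a few lines (the helper name is mine):

```python
# Decoding the console status word Stan describes: in "FxFF", the second
# hex digit times 10 is roughly how busy the CPU was over the last second.
def cpu_busy_percent(status):
    # status is a string like "F3FF"; the second character is the busy digit
    return int(status[1], 16) * 10

print(cpu_busy_percent("F3FF"))  # 30 -> the CPU was 30 percent busy
```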

"2) How is your memory being used now?

"You can get the free RAMUSAGE utility from www.allegro.com/software/ or directly at www.allegro.com/software/stuff/RAMUSAGE.LZW

"It will tell you how your memory is used by 'category', and tell you how much more 'user' memory would be available if you added more memory. For example:

RAMUSAGE [2.29] - LPS Toolbox [A.01j] (c) 1995 Lund Performance Solutions
SERIES 968RX
MPE/iX 5.0
#CPUS: 1
Memory size: 128 MB (134,217,728 bytes; 32,768 logical pages)

Memory usage by 'type' of Object Class:

Class         # LogicalPages   #MB   % total
SYSTEM_CODE            5,784    22     17.7%
SYSTEM_DATA           11,428    44     34.9%
UNUSED                   608     2      1.9%
USER_CODE              7,665    29     23.4%
USER_DATA                614     2      1.9%
USER_STACK               817     3      2.5%
USER_FILE              5,851    22     17.9%
Totals:               32,767   127    100.0%
'User' pages are 47.5 percent of memory (61 MB out of 128 MB)
If you added 32 MB, you'd have 1.5 times as much 'User' memory. (160 total MB)
If you added 64 MB, you'd have 2.0 times as much 'User' memory. (192 total MB)

This report is a small subset of the data provided by PAGES, one of the utilities in the Toolboxes from Lund Performance Solutions. Lund Performance Solutions can be reached at 503.926.3800. See our January issue for a Test Drive of Toolboxes.
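RAMUSAGE's "what if" lines assume that every added megabyte becomes 'user' memory, since the system's share is already resident. Given the report's figures (61 MB of user memory out of 128 MB total), the multiplier works out like this (the function name is mine):

```python
# RAMUSAGE's extrapolation: added memory all becomes 'user' memory,
# so the gain is (current user MB + added MB) / current user MB.
def user_memory_ratio(current_user_mb, added_mb):
    return (current_user_mb + added_mb) / current_user_mb

print(round(user_memory_ratio(61, 32), 1))  # 1.5 times as much 'user' memory
print(round(user_memory_ratio(61, 64), 1))  # 2.0 times as much 'user' memory
```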

SORTLIB: Insufficient stack space!?
There is a tendency among many of us who labored for years with the memory and stack restrictions of good old 16-bit MPE to believe that with MPE/iX, memory is never a problem for an application. It comes as a shock when we run into limits, but it gives those of us who have been around for a while a chance to reminisce about tried-and-true techniques developed in an era when every byte of disk and RAM was precious.

Here's the setup: "How much stack space is necessary to sort a rather large file with an extremely large record length, as denoted in the screen shot below? SORT.PUB.SYS and of course, COBOL SORT (which uses SORTLIB) both abort due to the lack of Stack Space."

The details:

HP31900A.02.00 TurboSORT TUE, JAN 21, 1997, 3:04 PM (C)
HEWLETT-PACKARD CO. 1987

>input claimbas.mtfdata.testmtf
>output clmsort
>key 1,12
>end
SORTLIB: INSUFFICIENT STACK SPACE

Program terminated in an error state. (CIERR 976)

:listf claimbas.mtfdata.testmtf,2
ACCOUNT= TESTMTF GROUP= MTFDATA

FILENAME  CODE  ------------LOGICAL RECORD-----------  --SPACE---
                 SIZE  TYP     EOF      LIMIT     R/B  SECTORS #X

CLAIMBAS        9367B   FA  306930     350000       3  11231232 *

Several people commented that they thought SORT has a record size limit of either 4096 or 8192 bytes and that this is probably causing the problem.

Neil Harvey suggested a blast from the past technique: creating a KSAM file with the correct index, copying the file into it and then copying the file back out correctly sorted. Might be a tad slow, but it should work.

Several people pointed out that SORT will optionally output a file with just the key values and the relative record number. Using the output file from SORT as input to a small program that does direct reads against the original file, you could then construct the desired sorted file.

In case SORT just cannot handle records of the size in the example, Duane Percox suggested:

"1. Write a small COBOL program that reads the input file and creates an output file that has two fields in it. The 12 byte key and a 32-bit record number (of the input file, sequentially assigned);

"2. Sort the new, shorter file on the 12 byte key; and,

"3. Write another small COBOL program that reads the sorted file and then uses the assigned record number to read the actual data record and create a new output file which will now be sorted correctly."
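Duane's three steps are the classic "tag sort": sort short (key, record-number) pairs instead of the full wide records, then rebuild the file in key order. A minimal sketch in Python (the actual solution would be the two small COBOL programs he describes; the function, record layout, and sample data here are illustrative):

```python
# Tag sort: sort (key, record-number) pairs, then reread the original
# records in sorted key order. Avoids ever sorting the wide records.
def tag_sort(records, key_start=0, key_len=12):
    # Step 1: pair each record's 12-byte key with its record number.
    tags = [(rec[key_start:key_start + key_len], n)
            for n, rec in enumerate(records)]
    # Step 2: sort the short (key, record-number) pairs on the key.
    tags.sort(key=lambda t: t[0])
    # Step 3: read the original records back in sorted key order.
    return [records[n] for _, n in tags]

data = [b"ZEBRA0000001 rest-of-wide-record",
        b"APPLE0000002 rest-of-wide-record"]
print(tag_sort(data)[0][:5])  # b'APPLE'
```

This is the same idea as SORT's key-plus-record-number output option mentioned above, just implemented by hand.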

If you have access to Suprtool from Robelle, you have another option. Suprtool can handle larger records. See our February issue for a Test Drive of Suprtool.