Journal for the Users of the Burroughs 6700
Number: 2, 1973 November

Contents:
From the editor and secretary                                    page  1
Technical contributions:
  K.L. Bowles, Maximizing B6700 throughput - a proposal                2
  K.J.E. Lewis, New option for SYSTEM/DUMPANALYZER                    19
  David Perlman, A joint documentation effort                         22
  Jack M. Hughes, Queen's University Program Library System           27
Book Review:
  Computer System Organization: The B5700/B6700 Series                39
Mailing List                                                          41
JUB 2-1
From the editor.
We are happy to say that we have received many messages of sympathy
in reaction to the first JUB6700 issue.
Again I should like to underline that JUB6700 is a technical journal
for which, at the moment, the Eindhoven University of Technology appears
as the editor.
When the journal proves its viability, an editorial board should take
over the editing.
Many persons promised a paper (for example: the interface for a
plotter, a comparison between Burroughs Extended Algol and Pascal)
and I should like to receive those.
Furthermore I request each B6700 user to consider whether he holds
knowledge potentially valuable for other B6700 users.
From the secretary.
Forced by circumstances (an increase in subscriptions, a decrease in
finances) we will send each B6700 installation only two copies
instead of three; by putting the journal in your library (as
you do with other journals), this decision is probably not harmful.
Perhaps a formal subscription will be necessary in the future.
For corrections and additions to the mailing list I refer to page 41.
MAXIMIZING B6700 THROUGHPUT - A PROPOSAL
by K.L. Bowles
(University of California, San Diego: Computer Center; La Jolla, California 92037, USA).
KEYWORDS: WORKFLOW MANAGEMENT, CORE MANAGEMENT, WORKINGSET, SWAPPER,
JOB SUSPENSION, JOB QUEUES, MCP OVERHEAD
SUMMARY
Any installation which must multiprogram a stream of many small tasks
(job steps), along with a background of larger production runs, must
cope with a serious drain on efficiency which results from the basic
B6700 core memory management strategy. The present (II.3 level) working
set control provides a measurable enhancement of throughput for the longer
background tasks running alone, i.e. without the stream of small tasks.
With or without workingset control, there is a very high overhead cost
associated with starting or terminating tasks, or with task suspension.
Burroughs now provides a core memory "swapper" intended to make the
B6700 efficient in timesharing applications. The central argument of
this proposal is that the swapper should be extended to provide two
level core management allowing maximum throughput for a mix of short
and long batch jobs.
DESCRIPTION OF THE PROBLEM
The problem described here is one faced by any B6700 installation which
attempts to multiprogram a large number of short to medium duration jobs,
along with a background of medium to long jobs.
The problem is considerably worse at installations attempting to mix
interactive remote users with the batch background, except in situations
where all remote users interact with a single interactive program capable
of handling multiple stations. The problem arises primarily because of the
Input/Output and processing overhead associated with core memory management
when a task is started or stopped, or when a task changes from active to
inactive status or vice versa.
JUB 2-3
Most of the descriptive material presented here is related directly
to the B6700 operation at UCSD. While the numbers given for items
such as core utilization are specific to UCSD, the general concepts
will apply to any B6700 where many short independent tasks must be
processed. In the context of the B6700, an independent task is any
program running under its own separately assigned stack (i.e. "mix")
number. At UCSD a JOB is a related group of tasks, run in some sequence
by a single user. This concept will become commonplace in the B6700
world once the II.4 level system is released by Burroughs. UCSD runs
an average of about 1800 JOBs each day, corresponding to about 6000
tasks each day. More than one half of all jobs consume less than 10
seconds of processor time each. On the other hand, there is almost
always a queue of longer jobs waiting to run, many of them requiring
several tens of minutes of processor time. UCSD runs a supervisory
program which attempts to manage the flow of work in such a way as
to maximize total throughput while honoring the announced priority
commitments. Similar supervisory functions will become available to
all B6700 users with the II.4 level release from Burroughs.
Every job run at UCSD is associated with one of five standard
priority QUEUEs designated Q1, Q2, ..., Q5. Q1 operates with the
highest priority assigned to users, being the "queue" used for
interactive programs. Each of the batch queues Q2, ..., Q5 has an
associated turnaround time commitment and a corresponding price
schedule. Q2 users get turnaround shorter than 2 minutes, Q3 shorter
than 20 minutes, Q4 generally less than 200 minutes. Q5, being the
remainder, ranges from 8 hours to overnight. An essential factor
allowing turnaround commitments to be honored for Q2, Q3 and Q4 is
that each queue has an associated maximum limit parameter for processor
time, I/O time, and lines printed. Though Q1 (interactive) users
operate in a mode differing from the batch concept, they do run many
independent tasks under control of CANDE. UCSD is using the II.3 level
SWAPPER for as many of these tasks as the II.3 software will permit.
Naturally the response time goal for Q1 tasks is only a few seconds.
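The queue structure just described can be sketched as a small model. The turnaround commitments below come from the text; the processor-time limits are illustrative assumptions (the text says only that each queue has limit parameters), and the assignment rule simply places a batch job in the fastest queue whose limit covers it:

```python
from dataclasses import dataclass

# Illustrative model of the UCSD batch queues Q2..Q5. Turnaround
# commitments are from the text; the CPU limits are assumed values.

@dataclass
class BatchQueue:
    name: str
    turnaround_min: float   # committed turnaround, minutes
    cpu_limit_s: float      # assumed max processor seconds per task

BATCH_QUEUES = [
    BatchQueue("Q2", 2, 30),
    BatchQueue("Q3", 20, 120),
    BatchQueue("Q4", 200, 1200),
    BatchQueue("Q5", 8 * 60, float("inf")),  # 8 hours to overnight
]

def assign_queue(cpu_estimate_s: float) -> str:
    """Place a job in the fastest queue whose CPU limit covers it."""
    for q in BATCH_QUEUES:
        if cpu_estimate_s <= q.cpu_limit_s:
            return q.name
    return "Q5"
```

Enforcing the limit at the queue level is what makes the turnaround commitments tenable: a job admitted to Q2 can hold a processor for at most a bounded time.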
It is characteristic of a multiple-use environment like the one just
described that short tasks can only consume a relatively small
proportion of the total processing resources. All general purpose computer
centers experience a mixture of job durations similar to that observed
at UCSD. While there are differences among general purpose installations,
those differences result mainly from differences in scheduling policies.
The characteristic of many small tasks mixed with a few large tasks is
found at all general purpose centers. Any general purpose B6700 center
which offers a response or turnaround time shorter than about 30 minutes
will experience queueing and scheduling problems similar to those at
UCSD. In order to keep the processing resources busy, it will be necessary
to mix short and long tasks in the machine at the same time. The medium
and long tasks will necessarily receive the great majority of the
productive resources (processor time, I/O time, core memory word-seconds). At
UCSD, Q3, Q4, Q5 tasks together consume about 75 percent of the productive
(billable) processor time, and similar amounts of I/O time and core. On
the other hand the short tasks account for at least 75 percent of the
overhead activities of the MCP and the Message Control Systems. To keep
things in context, 62 percent of the overhead processor time is consumed
by the MCP, while all MCS's together consume 38 percent.
As an objective, UCSD seeks to deliver at least 40 percent of all processor
time to users in productive time. In other words, at saturation no more
than 60 percent of the time should be consumed with the processors working
on overhead functions or idle. The goal is to be reached during periods
when there is constantly a backlog of work waiting in the input queues.
Current experience differs dramatically from this goal, thus accounting
for the effort to improve throughput.
Fortunately, for analysis purposes, the mix of work at UCSD is dependent
on time of day. Virtually all short jobs are processed in the Prime Shift
between 9AM and 10PM, while many of the long Q5 jobs are held to be run
after 10PM in the Night Shift. During the Prime Shift it rarely is possible
to deliver more than 20 percent of all processor time to the users.
During Night Shift it often is possible to deliver 60 to 70 percent of
all processor time to the users. The difference is primarily associated
with MCP overhead activities which cannot be billed to the users, or
are not billed to them because of overriding policy considerations. In
general UCSD attempts to avoid billing one user for overhead processing
caused by tasks of other users. Without this limitation, one user run
ning the identical task at different times might see his costs per run
fluctuating over a ten-to-one range. To provide a basis for comparison
with other B6700 systems, the following characteristics of the UCSD
system may be useful:
2 Central Processors, Model II (5/5 MHz)
2 I/O Processors with a total of 10 channels
10 Modules of 23 Ms. disk (200 M bytes)
3 Disk Electronic Units, 4 Peripheral controls, 1 exchange
15 Modules of 1.2 microsecond core (246K)
8 Tape Drives 96KB (5-9 track, 3-7 track)
Datacom Processor (plus spare)
Neither tape nor datacommunications activity is sufficiently heavy to be
a major drain on the I/O processor or its associated resources, except
during brief controllable intervals.
CORE MANAGEMENT INFLUENCE ON THROUGHPUT
Limitations of the standard B6700 approach to core memory management account
for almost all of the difference between the 40 percent throughput goal and
20 percent actual experience at UCSD. The explanation is a complex
interplay between the handling of overlayable (OLAY) and non-overlayable (SAVE)
core, involving differences between code segments and array rows, and so
on. SAVE core is allocated for task stacks, I/O buffers, disk file headers,
File Information Blocks, Task arrays, MCP tables, array dope vectors, the
Message Pool, SORT areas, and other assorted purposes. UCSD also allocates
19800 words of SAVE space currently for swappable tasks, of which there are
typically about 10 in the mix at one time. OLAY core is primarily used for
code segments and array rows.
At UCSD, 40 to 50 percent of the total core occupied by an active
task is generally allocated to SAVE core. This does not include
space for disk file headers which are not charged to the user by
the II.3 MCP. The other 50 to 60 percent of core is overlayable,
with wide fluctuations in the ratio of code to array-row space.
Code segments average about 175 words long. However, this average
is dominated by MCP and MCS code segments which typically occupy 5
times as much memory as do the combined total of code segments of
user programs. The MCP and CANDE together account for about 85 percent
of the overhead code space generally found in core. It is
probable that the average length of code segments compiled by users
at UCSD is smaller than the overall 175 word average. Overlayable
array rows average from 150 to 250 words long, but
this average tends to fluctuate more widely than the average length
of code segments.
At UCSD there is relatively little distinction between Q2 jobs and
Q5 jobs (or those in the in-between queues) regarding total core
occupied by a job. Since workingset control is in use, the core
measurements for any particular program running with similar input data
tend to remain relatively stable. The typical user task occupies
about 10K words of core total, of which 5K is SAVE and 5K is OLAY.
Most compiler runs occupy from 20 to 25K total, of which about 10K
is SAVE. A small minority of user tasks occupy more than 25K. As
might be expected, those that do occupy more than 25K tend to be
those which run for substantial periods of time. A majority of the
jobs run on the B6700 are of the compile-go type.
The core management problem arises because UCSD must control its
budget for use of core as much as possible. The need for control can be
appreciated from the following table:
TABLE 1
UCSD B6700 CORE MEMORY BUDGET

ITEM                          SAVE       OLAY        CUM TOTAL
MCP Stack & Tables            7K                     7K
MCP Code                      7K         30K-50K     44K-64K
Independent Runners           6K                     50K-70K
DCP SAVE                      7K                     57K-77K
MCP Misc OLAY                            4K-10K      61K-87K
Message Pool                  9K-13K                 70K-100K
File Headers                  4K-10K                 74K-110K
CANDE                         12K        10K-23K     96K-145K
MCS's                         9K-15K     11K-13K     116K-173K
SWAP SPACE                    20K                    136K-193K
Available Pool Minimum                   (10K)       146K-203K
Code for Swappable Tasks                 0K-10K      146K-213K
                              81K-97K    65K-116K    146K-213K
Allocatable
(including Headers and Code for swappable tasks)     53K-104K
Where a range of figures is given, the low figure applies generally
to the case when only stable long-running jobs are in the mix. The
high figure in each range is typical of UCSD Prime Shift operation.
It may be seen that UCSD can allocate about 100K to users during the
Night Shift. During Prime Shift, only about 50K is available to be
allocated to user tasks, including space for disk file headers and
for code in swappable tasks. Occasionally it is necessary to run an
interactive (Q1) task which is not eligible for swapping because it
contains a SORT, uses a compiler, uses an independent PROCESS, or
because it must refer to tape. The space taken up by any such Q1 tasks
must also come out of the totals just mentioned. Since the overhead
portion of all this core is so large, UCSD obviously has mounted a
campaign to reduce the overhead.
A word of explanation is required regarding the Available Pool
Minimum in the above table. In general it is observed that the
B6700 slows down measurably because of core-disk thrashing when
the net core space on the available list falls below about 10K.
On the B6700, there is no standard page size, and all segments
are allocated just the required amount of space. This leads to
"checkerboarding", i.e. the existence of many small available
spaces between areas that are in-use, where most of the available
spaces are too small to be allocated. Most of the 10K or so of
necessary available space is needed to provide for this
checkerboarding. It is remarkable that the B6700 does not require a
larger total of available space for checkerboarding to run
reasonably efficiently. When all running user tasks are long (as
during night shift) and when all have been allocated at least
their workingset amount of core, the B6700 can deliver from 60
to 70 percent of the processor time to user tasks if at least
10K of available core remains.
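The checkerboarding effect is easy to reproduce in a toy model. The sketch below uses illustrative sizes (the 150-word holes are in the range of the array-row segments discussed earlier): it lays core out as alternating in-use segments and small free holes, the pattern variable-size allocation tends to leave, and shows that a sizeable total of free core can coexist with no single hole large enough for a typical request:

```python
def checkerboard(total_words=10_000, in_use=300, hole=150):
    """Lay out alternating in-use segments and free holes and report
    total free space, the largest single hole, and the hole count."""
    holes = []
    addr = 0
    while addr + in_use + hole <= total_words:
        addr += in_use          # an allocated segment
        holes.append(hole)      # followed by a small free hole
        addr += hole
    return sum(holes), max(holes), len(holes)

free_total, largest_hole, n_holes = checkerboard()
# Here 3300 words are free, spread over 22 holes, yet no request
# larger than 150 words can be satisfied without compaction.
```

This is why a 10K "available pool" is needed even when, arithmetically, far less space appears to be in use.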
Scheduling during prime shift poses a much more difficult problem.
UCSD is currently controlling the number of user tasks allowed to
run in the mix at one time in an attempt to maximize throughput.
The crudeness of this control can be appreciated from the following:
Two compilers running concurrently will consume about 50K.
Recognizing this, there is a temptation for the UCSD Computer Center to
avoid scheduling more than two Q2, ..., Q5 tasks at one time. Such
a limit would make it all but impossible to maintain both fast
turnaround and production queues. In addition, it is generally
observed that two concurrent user tasks cannot use up nearly 40
percent of the available processor time on two processors, since
each is likely to spend a significant portion of its time waiting
for completions of requested I/O actions. Therefore, UCSD finds it
necessary to allow 3 or 4 Q2, ..., Q5 jobs in the mix at one time.
This poses no problem when all of the tasks are of the "typical"
10K occupancy type.
The problems arise when one completes, and the next in line is a
compile or some other task in the 25K or greater category. Once
the new task has entered the mix, and collected some fraction of
its intended workingset, the MCP finds it necessary to SUSPEND
one or more tasks in order to prevent the remainder of the mix
from thrashing.
Both the act of suspension, and the act of introducing the compiler
or other larger task into the mix, are expensive in terms of
throughput. The cost can be appreciated from the statistics cited earlier.
A compiler requires some 70 segments of code to be in core to
operate, as well as perhaps 50 to 60 OLAY segments. The I/O channel
time required for one PRESENCEBIT disk-core transfer is about 37
milliseconds (on 23 Ms disk), while the elapsed time averages close
to 60 milliseconds when the system is lightly loaded. As a result
of this, the elapsed delay from begin-job (BOJ) time until a
compiler starts operating productively ranges from 5 to 10 seconds
even on a lightly loaded system. During this process, the disk-I/O
resources of the system are subjected to a heavy load. Consequently
other tasks in the mix at the same time suffer queueing delays in
completing their I/O functions. Between the I/O queueing and the fact
that there is no space available to allocate to the several other tasks
concurrently, the effect of starting up a large program like a
compiler is to cause a hiatus in productive throughput ranging from
5 to 10 seconds on the system as a whole.
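The 5 to 10 second figure is consistent with the per-transfer times just quoted. A rough check, assuming one PRESENCEBIT transfer per segment and (for simplicity) strictly serial transfers:

```python
# Rough check of compiler start-up delay: ~70 code segments plus
# 50-60 OLAY segments, each fetched by one PRESENCEBIT transfer of
# about 60 ms elapsed time on a lightly loaded system. Serial
# transfers are assumed for simplicity.

ELAPSED_PER_TRANSFER_S = 0.060

def startup_delay_s(code_segments=70, olay_segments=55):
    return (code_segments + olay_segments) * ELAPSED_PER_TRANSFER_S

# 125 transfers at 60 ms each is 7.5 s, inside the 5-10 s range cited.
```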
When the MCP finds it necessary to SUSPEND a running task, the cost
may be even greater. While a task is suspended, many of its overlayable
segments become overlaid in the course of normal core management
activity. Indeed one of the purposes of the suspension is to allow the
MCP to replace the overlayable space of the suspended task with space
needed for other higher priority tasks. Both processor time and I/O
time are costly while the needed space is being acquired. UCSD
measurements show that the MCP spends an average of about 30 milliseconds of
processor time in handling each segment overlaid, far more than the
total processor time taken by the MCP for all other functions.
Moreover, each segment overlaid out to disk requires an average of
about 37 milliseconds of I/O channel time and a minimum of about
60 milliseconds elapsed time. This time is matched, on the average,
by the PRESENCEBIT I/O time needed to bring in the segment from
disk for which the space has been acquired. Thus, each overlay
action for array row segments requires about 74 milliseconds of
I/O channel time, and at least 0.120 seconds of elapsed time.
Fortunately, it is only necessary to read code segments from disk,
as they may not be modified by running programs.
Clearly the cost of a task suspension depends upon the length of
time the task remains suspended, as this will determine the total
amount of core overlaid. No good statistics on this length of time
have yet been obtained at UCSD. However, it has been observed that
the total loss of overlayable core by all suspended tasks often
ranges from 10K to 20K. This is usually accompanied by a total of
50 to 100 round-trip overlay actions initiated by the MCP's
GETSPACE procedure. Based on the average sizes of array-row segments,
the two observations are consistent. The resulting hiatus in
productive processing will be at least 6 to 12 seconds based on these
overlays. Frequently the hiatus will be caused by a sudden short-term
rise in MCP core requirements. For example certain types of I/O or
processing errors trigger an asynchronous MCP logging procedure which
requires its own stack, and many copies of these procedures may
suddenly appear concurrently. It is debatable whether the sudden
fluctuations in MCP core requirements are really necessary. However, the
point for this discussion is that they are a common cause of task
suspensions, and task suspensions are expensive because of the time
delay and processing overhead connected with the many overlays required.
At UCSD, the number of task suspensions during the Prime Shift appears
to be comparable to the number of jobs processed. Between the two it
may therefore be estimated that the Center loses roughly 30 seconds
out of every minute from productive processing for the reasons cited.
This estimate results from a load of about 2.5 JOBs per minute
(i.e. about 8 tasks per minute), and approximately one task
suspension per minute on the average. This estimate corresponds
to the maximum of about 20 percent of processor time deliverable
to the users during these shifts.
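As a sanity check on the 30-seconds-per-minute estimate, the observed event rates can be combined with per-event hiatus figures. The rates below are from the text; the per-event averages are assumptions chosen from the ranges quoted earlier (short task starts cost far less than a compiler start, so 2.5 s is used as an assumed mean):

```python
# Rough reconstruction of the "30 seconds lost per minute" estimate.
# Event rates are from the text; the per-event hiatus figures are
# assumed averages consistent with the ranges quoted earlier.

TASKS_PER_MIN = 8              # about 2.5 JOBs/min, ~8 tasks/min
SUSPENSIONS_PER_MIN = 1        # roughly one suspension per minute
HIATUS_PER_TASK_S = 2.5        # assumed average start/stop hiatus
HIATUS_PER_SUSPENSION_S = 9.0  # mid-range of the 6-12 s estimate

def lost_seconds_per_minute():
    return (TASKS_PER_MIN * HIATUS_PER_TASK_S
            + SUSPENSIONS_PER_MIN * HIATUS_PER_SUSPENSION_S)

# 8 * 2.5 + 1 * 9.0 = 29 s, roughly the 30 s/min cited.
```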
PROPOSED TWO-LEVEL CORE MANAGEMENT
The situation just described results from the fact that gross
changes in total core requirements take place on the B6700 over
a time interval too short for the standard multi-segment core
management scheme to respond. This is a familiar problem in the
timesharing application, where the user typically performs very
short computations at widely spaced intervals. On the B5500
Burroughs has long used a very successful two-level core
management scheme for timesharing in which the core belonging to an
inactive user is "rolled-out" en-masse in a single I/O transfer
to disk. When the user again requests a short computation, his
information is rolled-in to core from disk, again en-masse. The
B6700 provides a similar facility in the II.3 level SWAPPER,
understood to be more fully implemented in the II.4 system. This
proposal argues that the same basic SWAPPER should also be used
to maximize throughput for a mix of batch jobs, with or without
the presence of timesharing users.
There are two basic benefits to be derived from the roll-in/roll-out
mechanism of the SWAPPER:
a. Most of the SAVE core belonging to a suspended task is moved
out to disk. This means that practically all of the core
allocated to an inactive task becomes free to allocate to
another task.
b. The act of suspension can be reduced from many seconds to a
fraction of a second of elapsed time. The transfer rate of
the 23 Ms. disk at UCSD is about 67K words per second. Thus,
a 25K task can be rolled-out in 0.37 second. Moreover the
MCP bookkeeping operation is simplified when all segments
of a task are overlaid en-masse, and this gives some hope
that the large MCP processing overhead might also be reduced
materially.
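The arithmetic in point b follows directly from the transfer rate:

```python
# En-masse roll-out time at the quoted transfer rate of the 23 Ms.
# disk, about 67K words per second.

TRANSFER_RATE_WORDS_PER_S = 67_000

def rollout_s(task_words):
    return task_words / TRANSFER_RATE_WORDS_PER_S

# A 25K-word task rolls out in about 0.37 s, against the 5-10 s
# hiatus of acquiring the same space segment by segment.
```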
Whereas a "timeslicing" strategy is appropriate for timesharing
users, it probably is not desirable for handling a mixture
consisting only of batch jobs. Timeslicing implies that a task
will be allocated a maximum quantum of time for each active
tour through core. Upon exceeding that quantum, the task is
suspended and rolled-out to disk in order that other users
queued for processing may receive a share of the total
processing time. In a typical installation, the quantum is made
a fraction of a second, in order that the timesharing system
will deliver reasonable response time to users. This time is
comparable to the roll-in or roll-out time. Since large numbers
of users cannot be accommodated with full workingsets in core at
one time, the roll-in and roll-out cannot be fully overlapped
with processing by active tasks. Therefore the SWAPping overhead
is high for timeslicing with a quantum of a fraction of a second.
For a mixture of batch jobs like the Q2, ..., Q5 mix at UCSD,
there is no inherent need to timeslice. Instead the interrupts
normally encountered when tasks arrive in the input queue, or
when tasks become temporarily inactive, should suffice to trigger
the swapping activity. On the other hand, timesharing users can
occupy the same Swap Space as the batch users, with timeslicing
being used only when necessary to accommodate many concurrent
timesharing users. The conditions leading to timeslicing would
normally correspond to conditions when all Q2, ..., Q5 tasks were
rolled-out temporarily.
With the SWAP space being conceived as useful for most user
tasks on a mixed batch and interactive system, it may be
desirable to consider making the size of the total SWAP area
flexible. At UCSD it would be desirable to allocate perhaps
100K to the SWAP area for practically all user tasks. This
would allow 146K for the MCP and overhead functions. At times
of particularly intense MCP activity, the MCP might thrash
with this limitation - a condition which would lead to reduced
throughput. However, the MCP could shave needed core off either
end of the SWAP area temporarily, by first assuring that the
resident tasks had been rolled-out. Similarly, if the total
available pool of core had remained well above 10K for perhaps
10 seconds or more, the MCP could augment the size of the SWAP
area to allow more user tasks to be active at once.
WORKINGSET COMPARISON
In view of this discussion, it is desirable to review the concept
of the workingset. It remains true that a task running in the SWAP
space requires its workingset of core to run efficiently. In the
current II.3 level SWAPPER at UCSD, it has been observed that tasks
attempting to operate within less than their correct workingset do
thrash and cause excessive overlay action by the MCP GETSPACE
procedure. Since there is no automatic workingset control for these
tasks, users are advised to use a
?CORE-nnnn
control card when initiating a task in order to request more than
enough core to prevent thrashing. This limitation is really not
necessary as an adjunct of the SWAPPER. Each time the SWAPPER
rolls-in a task it must arrange that enough core be available in
a contiguous block for the task to become core-resident. If the
MCP observes more than some threshold number of getspace actions
in one second of processor time used by a task, it may request a
larger area the next time the task is rolled-in.
Similarly the size of a task's SWAP area may be reduced if the
overlay activity is too small. From the experiments performed
at UCSD by this writer, it seems that minimum costs are
associated with an overlay rate ranging from 2 to 5 per processor
second used by a task. This method of workingset control was
abandoned by Burroughs in the current system in favor of a
somewhat simpler system based on overlaying all of core at an
average target rate. The current method would not provide
adequate information to allow the size of the SWAP space
allocated to a task to be adjusted dynamically.
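The adjustment rule proposed here can be sketched as a simple feedback controller. The 2 to 5 overlays per processor-second band is from the text; the 10 percent step size is an illustrative assumption:

```python
def adjust_swap_area(current_words, overlays_per_cpu_s,
                     low=2.0, high=5.0, step=0.10):
    """Grow a task's SWAP allocation when its overlay rate indicates
    thrashing, shrink it when the rate indicates over-allocation.
    The 2-5 per CPU-second band is from the text; the 10% step
    size is an assumed value for illustration."""
    if overlays_per_cpu_s > high:    # thrashing: request more core
        return int(current_words * (1 + step))
    if overlays_per_cpu_s < low:     # over-allocated: trim the area
        return int(current_words * (1 - step))
    return current_words             # inside the target band: hold
```

The adjustment would be applied at a natural boundary, each time the task is rolled-in, so no mid-residence reshuffling is needed.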
It should be emphasized that Denning's workingset principle is
really applicable to the B6700 operating under "steady-state"
conditions, i.e. when running several long duration tasks and
no short duration tasks. The workingset concept describes an
ideal balance between processing and overlay activity on a
virtual memory system. That ideal balance implies that the
processors can be kept busy in the presence of a modest
background of I/O activity needed to make the segments of information
required in processing present only when actually needed. On the
B6700 that balance implies that a task should be able to perform
at least 0.2 seconds of processing between interruptions for
overlays. A higher overlay rate implies that the processor is
either idle or busy managing the overlays. A lower overlay rate
implies that too much core is allocated to the task. Since a
typical task on the B6700 uses from 30 to 300 independently
overlayable segments of information, it follows that any
significant change in the core occupied by a task must consume many
seconds if the workingset balance is not to be upset. We have
seen that the elapsed time for such a change to take place must be
in the neighbourhood of 5 to 10 seconds minimum, where the
minimum implies that no other tasks are undergoing significant
changes at the same time.
With three tasks running, the characteristic time for significant
change should be 15 to 30 seconds to stay within the workingset
balance.
Of course the steady-state concept breaks down if a system must
be used to process a large number of tasks whose duration is
short compared with the characteristic time for a significant
workingset change. This is in fact true at most general purpose
computer centers, where more than half of all tasks consume no
more than about 5 seconds of processor time. Another similar
factor is the influence of the changing core requirements of
the MCP itself. The MCP requires additional space, over steady-
state, when starting or terminating a task, when opening or
closing a file, when coping with an I/O error, and so on. The
duration of processing for these MCP operations is generally
only a fraction of a second, yet the overlayable core space
required will often be in the 5K-10K range for each such
operation. MCP "tasks" of this type are normally spaced by
many seconds with a mixture of long-duration user tasks, and
the 15-30 second workingset characteristic time need not be
compromised. However when the average separation between user
tasks is only 5-10 seconds, the average separation between these
MCP functions will average in the same range, if not less.
The workingset mechanism is not designed to cope with changes
like this that are short compared to the characteristic time.
In so far as the changes are predictable in advance, the changes
in core occupied should be controlled in a deterministic way
rather than depending upon the random balance of the workingset
mechanism.
It should be emphasized that the foregoing discussion applies
equally well to the B6700 with or without the workingset SHERRIFF
turned on. With the SHERRIFF, the steady state runs more
efficiently by maintaining a pool of available space for quick
reaction to PRESENCEBIT requests.
Without the SHERRIFF, a similar steady state develops in which
all active tasks contend with each other for the total space in
use.
REDUCING COMPILER START-UP OVERHEAD
In addition to the expense associated with overlaying during
task suspensions, we have also mentioned the high starting
overhead to get large programs, particularly compilers, into
operation. Using the proposed SWAPPER concept for all user
jobs, the starting overhead for popular large programs like
compilers could be drastically reduced. In practice, each such
program would have an associated "bucket" of code segments
waiting to be rolled-in en-masse when the program is started.
The bucket, stored either as part of the regular code file or
separately, could be made either dynamic or fixed when compiled.
The bucket would normally contain an approximation to the
workingset of code segments normally needed by the program during its
first 5 to 10 seconds of processing. This approximation could
be achieved dynamically by simply storing, in the bucket area
of the code file, those segments found to be present when the
large program or compiler terminates. Normal overlay activity
within the SWAP space would assure that the bucket would contain
the information most frequently needed, at least as an
approximation. The size of the bucket could be specified when the large
program is compiled, as it would be known normally through
observation of the workingset applicable to code segments (the D1
workingset).
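A minimal sketch of the bucket bookkeeping, with all names hypothetical: record the resident code segments at termination (up to the bucket's compiled-in capacity), then on the next start roll the bucket in en-masse and fault in individually only the segments it missed:

```python
# Hypothetical sketch of the code-segment "bucket". Segment numbers
# stand in for the code segments of a large program; the capacity
# corresponds to the bucket size fixed at compilation.

def save_bucket(resident_segments, capacity):
    """At termination, keep up to `capacity` resident segment numbers
    in the bucket area of the code file for the next run."""
    return set(sorted(resident_segments)[:capacity])

def presencebit_faults(segments_touched, bucket):
    """Segments still needing an individual PRESENCEBIT transfer
    after the bucket has been rolled-in en-masse."""
    return [s for s in segments_touched if s not in bucket]

bucket = save_bucket({3, 1, 9, 2}, capacity=3)
faults = presencebit_faults([1, 2, 9], bucket)  # only segment 9 faults
```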
The strategy suggested here might well improve the compiler
efficiency for small jobs by a factor of 5 or better. This would
go a long way toward making the B6700 standard compilers competitive
with optimized compilers like WATFOR in typical university
environments.
SWAPPING FOR RSVP CONDITIONS
For general purpose service centers, the RSVP strategy employed
on the current B6700 system is a serious limitation. The most
obvious example is related to the user-operator-system
interaction needed when a tape is to be utilized. Whereas scheduled
data processing will allow an installation to give the operators
advance instructions on which tapes to mount and make ready for
a program, advance warning is often not practical in the service
center environment. Therefore it often occurs that a user's job
will be introduced into the mix, whereupon the job requests that
a tape be attached. Even if the B6700 operator has received
advance notification that a tape is to be mounted, he must take
time to associate the MIX number in the RSVP message from the
system with the user's job instructions. In a multiprogrammed
installation, this situation can become very confusing, and
lead to operations delays of several minutes before the program
requesting the tape can be made active.
Since the MCP issues the RSVP request to the operators, it follows
that the MCP has all the information needed at RSVP time to roll
out the task which has thereby become inactive. When the operator
readies the tape, or clears any other RSVP condition, the MCP can
then roll in the task as soon as space is available at the
corresponding priority. Considering the rate of tape usage at UCSD, this
change could result in a 20 percent improvement of maximum
throughput, since other tasks could be made active in the interim and the
SAVE space occupied by the inactive task could be rolled out.
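The proposed roll-out on RSVP and roll-in on clearance can be sketched as follows. This is an illustrative model only; the names (SwapSpace, rsvp, clear_rsvp) and the unit sizes are invented for the sketch and are not MCP identifiers.

```python
# Illustrative sketch of the proposed RSVP handling, not actual MCP code.
# Sizes are in arbitrary units of SAVE space.

class SwapSpace:
    def __init__(self, capacity):
        self.capacity = capacity
        self.resident = {}      # mix number -> SAVE space held by the task
        self.waiting = []       # (priority, mix, size) rolled out on RSVP

    def used(self):
        return sum(self.resident.values())

    def rsvp(self, mix):
        """Task issues an RSVP (e.g. a tape request): roll it out at once,
        freeing its SAVE space for other tasks in the interim."""
        return self.resident.pop(mix)

    def clear_rsvp(self, mix, size, priority):
        """Operator readies the tape: queue the task for roll-in."""
        self.waiting.append((priority, mix, size))
        self.waiting.sort()     # lower number = higher priority first

    def try_roll_in(self):
        """Roll waiting tasks back in, by priority, as space permits."""
        rolled = []
        for entry in list(self.waiting):
            priority, mix, size = entry
            if self.used() + size <= self.capacity:
                self.resident[mix] = size
                self.waiting.remove(entry)
                rolled.append(mix)
        return rolled
```

With a 100-unit SWAP space holding tasks of 60 and 30 units, rolling the first out on its RSVP leaves 70 units free until the operator clears the condition and the task is rolled back in.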
IMPLEMENTATION LIMITATIONS
The extent to which Burroughs may have solved the problems mentioned
here, as of the 11.4 system release, is not known to this writer
at this time.
a. The SWAPPER cannot currently handle programs which use
interprocess communication, because of bookkeeping complications.
Problems would arise if an asynchronous process terminated
while the parent were separately rolled out. Coroutines
cannot be swapped in practice at UCSD because of a file
security limitation. For the vast number of situations at a
service computer center, these problems could be circumvented
by regarding a user parent task and all of its siblings as a
single SWAPpable entity.
b. The SWAPPER cannot currently handle tasks which use the SORT
feature of the MCP. This is because the SORT refers to areas
of core allocated to the calling task. It is a serious
limitation in a multiple queue environment, for the SORT is so
highly optimized that it grabs a major fraction of the system
resources regardless of the priority of the calling task.
Some provision should be made to allow a SORT to operate in
the SWAP space so that the SORT can be subjected to the same
work flow management as other tasks.
c. The SWAPPER cannot currently handle tasks which perform tape
I/O operations in practice, because a task waiting for long
periods for a tape to be readied or positioned will tie up
the SWAP space while waiting. This would have to be solved
by extending the provisions for the RSVP condition mentioned
earlier.
d. The SWAPPER needs to be modified to allow a high priority task
to cause roll-out of a lower priority task. There should be an
inherent time delay before permitting this to happen, in order to
prevent thrashing of the SWAPPER itself. At UCSD a Q1 task
waiting to be rolled in should be able to cause roll-out of
any Q2, ..., Q5 task if the space is needed. However it would
be quite satisfactory for a Q2 task to wait for 10 to 30 seconds
before causing a roll-out interruption of a Q3, ..., Q5 task.
In fact each priority queue might well have an associated SWAP
interruption delay, in order to prevent unnecessary SWAPPER
traffic.
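The per-queue interruption delays suggested above can be sketched as a simple table lookup. The delay values here are assumptions chosen to match the 10 to 30 second figures in the text; they are not measured UCSD parameters.

```python
# Assumed per-queue delays (seconds) before a waiting task may force a
# roll-out; Q1 may evict immediately, lower queues must wait longer.
SWAP_DELAY = {1: 0, 2: 10, 3: 20, 4: 25, 5: 30}

def may_force_rollout(waiter_queue, victim_queue, seconds_waited):
    """A waiting task may evict only a strictly lower-priority task, and
    only after its queue's delay has elapsed, to avoid thrashing the
    SWAPPER itself."""
    if victim_queue <= waiter_queue:   # never evict equal or higher priority
        return False
    return seconds_waited >= SWAP_DELAY[waiter_queue]
```

Under these assumed values a Q1 task evicts a Q3 task at once, while a Q2 task must have waited at least 10 seconds.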
NEW OPTION FOR SYSTEM/DUMPANALYZER
by K.J.E. Lewis
(Technical Support Group, Computer Services Department, Midland Bank Ltd.)
1. INTRODUCTION
The Midland Bank application programs are generally based on one FATHER
stack firing up a series of SONS. These SONS may in turn fire up their
own SONS. When an error interrupt occurs for one of the processes, it is
desirable to take an instant snapshot of the stack in error plus all its
ancestors and their segment dictionaries.
At first it was thought that PROGRAMDUMP should be modified but this
idea was rejected for the following reasons:
1. We need to halt all processors while doing this so that the FATHER
stack is not altered by other SONS while we are dumping.
The time then needed to unravel stacks while holding all the system
is greater than that needed for a memory dump.
2. Vital areas could be overlayed by the program dump code.
3. A full memory dump can be re-analysed later if more information is
required.
4. The DCALGOL construct MEMORYDUMP will allow programmatic use of
"TAPEDUMP" as well as the operators being able to force it.
2. PATCHES TO SYSTEM/DUMPANALYZER
2.1. A new option "FAMILY" was implemented which unravelled the FATHER
stack(s) of a given MIX number together with all segment dictionary
stacks and areas, and suppressed the listing of irrelevant
information.
2.2. The MIX option was extended to output segment dictionary stacks.
3. SYNTAX OF FAMILY OPTION
<FAMILY OPTION> = FAMILY <MIX LIST>
<MIX LIST> = <MIX NUMBER> | <MIX LIST> <MIX NUMBER>
4. SEMANTICS
The FAMILY option enables the user to specify a particular MIX number
or series of MIX numbers. These numbers are stored in the array
DISASTERAREAS as usual but when the check for specific stack dumps is
made:
1. The appropriate segment dictionary stack is requested.
2. PROCESSFAMILYLINK is accessed and if FATHER NEQ 0 then that stack
and its segment dictionary stack are requested.
3. Step 2 is repeated until such time as FATHER = 0.
If both FAMILY and MIX options are utilized in the OPTIONS deck then
the last in overwrites the first.
The FAMILY option can be used for a non-dependent stack when a short
analysis is required.
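The tree climb of steps 1 to 3 can be sketched as follows. The dictionary layout and field names (father, segdict) are illustrative stand-ins for the dump's stack vector, not actual DUMPANALYZER identifiers.

```python
# Illustrative sketch of the FAMILY option's climb of the family tree.
# `stacks` maps a stack (MIX) number to its FATHER link and segment
# dictionary stack number; 0 marks the top of the tree.

def family_stacks(stacks, mix):
    """Return the given stack, all its ancestors, and every associated
    segment dictionary stack, following FATHER links until FATHER = 0."""
    requested = []
    current = mix
    while current != 0:
        entry = stacks[current]
        requested.append(current)              # the stack itself
        requested.append(entry["segdict"])     # step 1: its segment dictionary
        current = entry["father"]              # steps 2-3: climb until 0
    return requested
```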
A copy of the patches to SYSTEM/DUMPANALYZER is included.
PATCH NUMBER 7
DUMPA 0006B KJEL: PRINTS SEG.DIC.STACKS & IMPLEMENTS FAMILY OPTION
FAMILY = 1#, L5,
SDSTACK = M[X+1].SEGDICTF#,
FATHERSTACK = M[X+12].[35:12]#,
FAMDUMP,
IF P2 = 8"MIX" OR P2 = 8"FAMILY" THEN % DUMP SPECIFIED STACKS
FAMDUMP := P2 = 8"F"; IF FAMDUMP THEN GO HUH;
X := M[ARRAYSTACK+SNR].ADDRESSF; % DUMP SEG.DICT STACK
SETREQUESTED(SDSTACK);
IF FAMDUMP THEN
WHILE (X := FATHERSTACK) NEQ 0 DO
BEGIN % CLIMB THE FAMILY TREE
SETREQUESTED(X); X := M[ARRAYSTACK+X].ADDRESSF; SETREQUESTED(SDSTACK);
END;
IF NOT FAMDUMP THEN IF NOT (SYSTEMLOCKS IS 0 OR FAMDUMP) THEN
IF MCPNAMESAVAIL AND NOT FAMDUMP THEN IF NOT FAMDUMP THEN
IF LINKDUMP AND NOT FAMDUMP THEN BEGIN IF NOT FAMDUMP THEN
IF NOT FAMDUMP THEN BEGIN END; IF NOT FAMDUMP THEN BEGIN END; IF NOT FAMDUMP THEN BEGIN END;
A JOINT DOCUMENTATION EFFORT
by David Perlman
(San Diego: Computer Center; La Jolla, California 92037-USA).
1. We at UCSD have begun a composite document on the 11.4 MCP. As it
grows we hope to have a catalog of all significant variables, arrays,
and procedures, as well as information useful in System Maintenance,
Dump Analysis and Bug Correction.
We invite you to participate by writing up and circulating any
information you have. Toward this end I have enclosed a proposed documentation
format and a few samples of the kinds of things we are interested in
including in an alphabetized notebook. With many members contributing,
we shall all soon have a file of self-documenting and cross-referenced
notes.
We hope that you will join our effort and would appreciate hearing
your comments on this project. I am personally willing to act as
coordinator, but the price of admission is an occasional contribution.
2. One of the contributing factors to the notorious reticence of most
programmers with respect to documentation might be the frustration involved
in trying to write a complete description. In particular, the notion of
attempting to write a complete description of the B6700 Master Control
Program is quite untenable.
However, it is possible to generate a substantial collection of useful
information in a piecemeal fashion. One must simply overcome the feeling
that a document must be complete in order to be useful. A valuable,
albeit incomplete, description of any large system can be developed by
simply adding bits and pieces of information as they are discovered. In
time the data will begin to integrate.
One way to collect such a document is to file each addition
alphabetically by subject, and if the practices outlined below are followed,
the file will tend to grow quite rapidly.
First, no information must ever be regarded as too insignificant for
inclusion. It is too easy to fail to document something important
unless everything is regarded as valuable, and it becomes good practice
to write down every new discovery.
Second, it is important that many people be involved. Often we carry
information that we do not communicate to others simply because we never
thought to do so. Thus, an entry by one person can elicit additional
information from another. In keeping with this idea, it is important that
the file be structured in such a way that additions under an existing
subject heading can be easily made. A loose-leaf notebook is a simple
scheme that works nicely.
Third, it is interesting to note that even erroneous entries are better
than none at all since they will ultimately evoke a correction.
Finally, an extensive cross-reference list for each entry is necessary
so that specific items can be easily located. In fact, the very act of
looking for a nonexistent subject heading indicates that that subject
should be added to the notebook, if only to provide cross-references to
the proper subject headings.
A joint effort by the members of CUBE could produce an extremely
powerful tool for maintenance and debugging of the B6700 MCP.
(Remark of the editor: wouldn't it be better to speak of a joint effort
of the B6700 subgroups of CUBE and ABCU?)
3. DOCUMENT PROTOTYPE.
This page describes what the format of the rest of the pages in the
notebook should look like. The requirements are as follows:
(a) Leave a fairly wide left margin so that the page can be bound into
a notebook.
(b) Put the title both at the top and in the right margin so that
subjects can be easily located in the notebook.
(c) At the bottom of the text put in the date, your name, and your
installation in order to facilitate updating and correspondence.
(d) The form of the text can be whatever seems appropriate for the
subject matter. Furthermore, it need not be typed, although handwritten
entries should be in ink so that they will duplicate easily.
Entries need not be in any sense complete. Any information is better
than none, and additions can always be included.
Note that this page is itself in the form it describes.
Cross-references should be given where appropriate.
4. Examples.
4.1. ADDRESSF.
ADDRESSF is DEFINED at 01084000 to be [19:20]. It is extremely important
since it appears in all types of descriptors and most memory links.
7-23-73, Gyl Wagnon UCSD.
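The [start:length] partial-field notation used here can be illustrated with a small helper; the function names are invented for the sketch, and [19:20] is read as the field occupying bits 19 down to 0.

```python
# Illustrative sketch of B6700-style partial-field extraction.

def field(word, start_bit, length):
    """Extract a [start:length] field: the bits from start_bit down to
    start_bit - length + 1, right-justified."""
    return (word >> (start_bit - length + 1)) & ((1 << length) - 1)

def addressf(word):
    """ADDRESSF is [19:20]: the low-order 20 bits of the word."""
    return field(word, 19, 20)
```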
4.2. DISK FILE HEADERS.
Every disk file has an array of at least 15 words associated with
it, known as the disk file header. The format and content of the
header is described in the SYSTEM MISCELLANEA (16 April 1973),
section 4.4.2 (p. 4-25 thru 4-29).
A general note on how the system handles headers:
File headers are always kept current, even on disk. When a file is
opened, a copy of the header is read into core. If the file is open,
then
1. the copy on disk contains the index into the array
DISKFILEHEADERS (COREINDEX) for the header in words 0 and 12;
2. the copy in core has the disk address for the header in word
0 and the COREINDEX in word 12.
Note that during the process of updating the disk copy, the in-core
copy is not correct.
see DISKFILEHEADERS
DIRECTORYSEARCH
RELEASEHEADER
7-17-73, David Perlman UCSD.
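The word 0 / word 12 bookkeeping described above can be modelled as follows. The Header class and field names are assumptions for illustration, not MCP structures.

```python
# Illustrative model of disk file header handling, not actual MCP code.

class Header:
    def __init__(self, disk_address, data):
        self.word0 = disk_address   # closed file: word 0 holds disk address
        self.word12 = 0
        self.data = data

def open_header(disk_copy, core_index):
    """Opening a file: the in-core copy keeps the disk address in word 0
    and the COREINDEX in word 12, while the disk copy is rewritten to hold
    the COREINDEX in both words 0 and 12."""
    core_copy = Header(disk_copy.word0, disk_copy.data)
    core_copy.word12 = core_index
    disk_copy.word0 = core_index
    disk_copy.word12 = core_index
    return core_copy
```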
4.3. FORGETCHECK(STKN).
FORGETCHECK is an MCP procedure located at 36880000 - 36902000.
It is called during the process of removing a stack from the system.
The procedure searches through all of the in-use areas of memory looking
for areas belonging to stack STKN. If any are found, then a memory dump
is caused if NOCHECK is set.
When reading FORGETCHECK memory dumps, the address of the offending
area can be found at location (1,4) in FORGETCHECK's stack.
7-13-73, John Justice UCSD.
4.4. MEMORY LINKS.
Memory areas are surrounded by memory links. In-use areas have three
words in front (LINKA, LINKB, and LINKC) and one word in back (LINKZ).
Available areas have two words in front (AVAILA and AVAILB) and two
words in back (AVAILY and AVAILZ). Information regarding the various
links can be found under specific topics.
see LINKA
LINKB
LINKC
LINKZ
AVAILA
AVAILB
AVAILY
AVAILZ
GETSPACE
FORGETSPACE
MAKEPRESENTANDSAVE
TURNOVERLAYKEY.
7-13-73, Gyl Wagnon UCSD.
4.5. MSGVECTORS.
MSGVECTORS is a dope vector for the message pool.
Each row of the message pool is 512 words long. They are allocated
as needed and (hopefully) deallocated when no longer needed.
7-20-73, David Perlman UCSD.
The first word of each pool is a data descriptor (i.e. tag 5). The
address field points to the first word of the first available area,
which has a length given by the length field of the first word. The
first word of each available area has a format identical to that of
word 0, i.e., a data descriptor giving the address and length of the
next available area. Only the tag, length, and address fields are used.
If address and length fields are 0, this is the last available area.
Any in-use area is preceded by a tag 7 word whose MSGLGHF [32:9] gives
the length.
7-21-73, Kurt Barthelmess UCSD.
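Walking the available-area chain described above can be sketched as follows, modelling each word as a (tag, length, address) tuple. This is an illustrative model of the chain only, not the actual 48-bit word handling.

```python
# Illustrative sketch: `pool` maps a word offset within one 512-word
# message pool row to a (tag, length, address) tuple.

def available_areas(pool):
    """Yield (address, length) for each available area, starting from the
    data descriptor in word 0 and following the chain until a descriptor
    with both address and length 0 marks the last available area."""
    areas = []
    tag, length, address = pool[0]
    assert tag == 5                 # word 0 is a data descriptor
    while not (address == 0 and length == 0):
        areas.append((address, length))
        tag, length, address = pool[address]
        assert tag == 5             # each chain word is also a descriptor
    return areas
```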
Queen's University Program Library System
by Jack M. Hughes
(Computing Centre, Queen's University, Kingston, Ontario, Canada)
1. BACKGROUND
Queen's University has been an active member of the Co-operative
Library Interest Group (CLING), formed under the auspices of the
Council of Ontario Universities to seek ways of improving the
contents of, and facilitating access to, software libraries at
the various member universities' computing centres. To encourage
interchange of software and related information, a list of
classifications and a classification coding scheme were developed.
2. THE CLASSIFICATION SCHEME
The 219 CLING classifications of software are divided among five
major categories: Service Programs, Statistics and Operations
Research, Academic Applications, Business Applications, Numerical
Methods. Within each category, classifications are further divided
by class and sub-class. An eight-digit program number is used for
identifying individual program units. A single alphabetic character
identifies the category of the program, two-digit numbers identify
class and sub-class, and a three-digit serial number identifies an
individual program within its sub-class. Thus the program number
E.45.02.015 represents the program NROOT in sub-class 02 ("Eigenvalues
and Eigenvectors") within class 45 (Matrices, Vectors, Simultaneous
Linear Equations) within category E (Numerical Methods).
A list of the CLING classifications and their codes is appended to
this document.
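The eight-digit numbering scheme can be illustrated with a small parser for numbers like E.45.02.015; the function and field names are chosen for the sketch.

```python
# Illustrative parser for a CLING program number of the form
# <category letter>.<class>.<sub-class>.<serial>, e.g. "E.45.02.015".

def parse_cling(number):
    category, cls, subclass, serial = number.split(".")
    return {
        "category": category,     # one letter, e.g. E = Numerical Methods
        "class": int(cls),        # two-digit class
        "subclass": int(subclass),# two-digit sub-class
        "serial": int(serial),    # three-digit serial within the sub-class
    }
```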
3. JOINT CATALOGUE INPUT
Once all member universities have converted their program library
classification systems, CLING plans to compile and publish a joint
catalogue of library software.
To this end a standard input format has been designed for
entering data describing members' programs. Using a set of
from one to five punched cards or card images, each entry
to the catalogue includes the following information:
Program number (CLING)
Installation code
Machine system code - where an installation has more than one
Program unit name
Source language code
Standard or non-standard source language code
Unit type - main program, sub-program, macro, etc.
Availability - on-line, off-line, etc.
Confidence rating codes
Package code - IBM SSP, IMSL, etc.
Program description - up to 250 characters
4. QUEEN'S IMPLEMENTATION
Queen's University Computing Centre has implemented a program
library system based on the CLING scheme.
4.1. Input
Input to the Queen's system is prepared according to CLING
specifications and will also serve as input to the joint
catalogue. Data is punched on cards and made available to
users as an on-line disk file. All application programs,
system utility programs, compilers, APL functions and
subroutine packages maintained by the Computing Centre are
included. Program units maintained by user departments within
the university are in the process of being included.
4.2. Output
Output from the system is available to users in the following
forms.
4.2.1. Program Library Index
The Program Library Index includes all the information
contained on the main data file. It also includes an
introductory section explaining the system and listing
all the CLING classifications and codes. Program units
are listed in sequence by CLING classification. The
index-producing program, called LIBINDEX, is written in
ANS COBOL and has the facility of producing a listing of
only those programs selected according to criteria expressed
by the user on a control card.
4.2.2. LIBINDEX Documentation
This print-out includes the documentation necessary for
using the selective capability of LIBINDEX as described
in 4.2.1. It also includes the complete list of CLING
classifications and codes. This is printed presently by
an IBM utility program, but an ANS COBOL program is
being written to replace it.
4.2.3. Program Library Keyword Index
This is a keyword index created from the program description
text on the main data file. The listing includes
the keyword, the program numbers and names and the first
100 characters of the description. Full information about
the programs' availability must be obtained by the user
from the regular Program Library Index. The listing is
produced by a PL/I program, but certain editing functions
on the text are first carried out by a SNOBOL program.
4.2.4. Name-to-Number Cross-Reference List
This lists all program entries in the data file in sequence
by program name and gives the programs' CLING numbers. It
is useful to those searching for particular program units
with whose name they are already familiar.
4.2.5. APL Output
APL library workspaces on Queen's B6700 were rearranged
to coincide with the CLING classifications of the functions
they contain. Public library numbers are now the
same as CLING class numbers. Descriptions of all APL
library functions and some library workspaces are
included in the main batch file so APL users can obtain
useful catalogues from the batch system. As well, there
are available through the APL system a list of CLING
classifications in which APL library functions are
available, a cross-reference list of old to new library/
workspace numbers/names for all library functions, and
a conversational search program which leads users through
the CLING classification system to locate APL functions
by their CLING classification.
4.2.6. Availability of Output to Batch Users
The Program Library Index, LIBINDEX documentation, Keyword
Index and Name-to-Number Cross-Reference List programs are
all on-line on the IBM/360 and may be executed by any regular
batch user. Conversion to the B6700 is now underway and is
expected to present no difficulty, except for the preliminary
SNOBOL phase of the keyword index system. Initially an
intermediate tape file of keyword records will be made available
for listing from the B6700, and updated with a new tape from
the /360 system as required, until a text-processing program
is written for the B6700.
Catalogued procedures are used so that the user normally can
produce any of the available lists with only one additional
card.
Binders containing all the available lists are made available
for reference at all terminal sites on the campus and are
updated twice annually.
5. AVAILABILITY OF THE SYSTEM TO OTHER CENTRES
Queen's would be happy to provide its program library system
to any other installation requiring it. The complete system
for the IBM/360-50 should be available for distribution by
the end of 1973, and for the B6700 (less parts of the keyword
portion of the system) by February 1974. Interested installations
who may wish to begin preparation of input data for the system
may obtain data coding information from Queen's Computing Centre
now. Distribution fees have not yet been determined, but will
reflect distribution costs only.
Inquiries should be directed to the writer:
Jack M. Hughes
Program Librarian
Computing Centre
Queen's University
Kingston, Ontario, Canada
Telephone: 613-547-3270.
QUEEN'S UNIVERSITY COMPUTING CENTRE - PROGRAM LIBRARY INDEX CLASSIFICATIONS AND CODES
CATEGORY A - SERVICE PROGRAMS
00.00 UTILITY (EXTERNAL) PROGRAMS - UNCLASSIFIED
00.01 UTILITY (EXTERNAL) PROGRAMS - MULTIPLE UTILITY
00.03 UTILITY (EXTERNAL) PROGRAMS - TAPE HANDLING
00.04 UTILITY (EXTERNAL) PROGRAMS - DISK HANDLING
00.05 UTILITY (EXTERNAL) PROGRAMS - DRUM AND DIRECT DATA DEVICES
00.06 UTILITY (EXTERNAL) PROGRAMS - GRAPHIC DISPLAY DEVICES
01.00 UTILITY (INTERNAL) PROGRAMS - UNCLASSIFIED
01.01 UTILITY (INTERNAL) PROGRAMS - LOADING
01.02 UTILITY (INTERNAL) PROGRAMS - CLEAR/RESET MEMORY
01.03 UTILITY (INTERNAL) PROGRAMS - CHECK SUM ACCUMULATION AND CORRECTION
01.04 UTILITY (INTERNAL) PROGRAMS - INTERNAL HOUSEKEEPING
01.05 UTILITY (INTERNAL) PROGRAMS - DUMP TO RELOAD/RESTORE OPERATIONS
01.06 UTILITY (INTERNAL) PROGRAMS - FILE ORGANIZATION
01.07 UTILITY (INTERNAL) PROGRAMS - SELF-CHECKING DIGIT
01.08 UTILITY (INTERNAL) PROGRAMS - PACKED DATA HANDLERS
01.09 UTILITY (INTERNAL) PROGRAMS - TIMING
02.00 DIAGNOSTICS - UNCLASSIFIED
02.01 DIAGNOSTICS - STATUS RECORDERS
03.00 PROGRAMMING SYSTEMS - UNCLASSIFIED
03.01 PROGRAMMING SYSTEMS - ASSEMBLERS
03.02 PROGRAMMING SYSTEMS - COMPILERS
03.03 PROGRAMMING SYSTEMS - INTERPRETIVE SYSTEMS
03.04 PROGRAMMING SYSTEMS - INPUT/OUTPUT CONTROL
03.05 PROGRAMMING SYSTEMS - REPORT GENERATORS
03.06 PROGRAMMING SYSTEMS - PREPROCESSING AND EDITING
03.07 PROGRAMMING SYSTEMS - MACROS AND MACRO GENERATORS
03.08 PROGRAMMING SYSTEMS - FUNCTIONS AND SUBROUTINES
04.00 TESTING AND DEBUGGING - UNCLASSIFIED
04.01 TESTING AND DEBUGGING - DUMPING
04.02 TESTING AND DEBUGGING - TRACING
04.03 TESTING AND DEBUGGING - TEST DATA PREPARATION
04.04 TESTING AND DEBUGGING - TESTING SYSTEMS
04.05 TESTING AND DEBUGGING - BREAK POINT PRINT
04.06 TESTING AND DEBUGGING - MEMORY VERIFICATION AND SEARCHING
05.00 EXECUTIVE ROUTINES - UNCLASSIFIED
05.01 EXECUTIVE ROUTINES - MONITOR
05.02 EXECUTIVE ROUTINES - SUPERVISOR
05.03 EXECUTIVE ROUTINES - DISASSEMBLY AND DERELATIVISING
05.04 EXECUTIVE ROUTINES - RELATIVISING
05.05 EXECUTIVE ROUTINES - RELOCATION
06.00 DATA HANDLING - UNCLASSIFIED
06.01 DATA HANDLING - SORTING
06.02 DATA HANDLING - MERGING
06.03 DATA HANDLING - DATA TRANSMISSION
06.04 DATA HANDLING - TABLE OPERATIONS
06.05 DATA HANDLING - CONVERSION AND/OR SCALING
06.06 DATA HANDLING - CHARACTER AND SYMBOL MANIPULATION
06.07 DATA HANDLING - INFORMATION CLASSIFICATION, STORAGE AND RETRIEVAL
06.08 DATA HANDLING - PROCESSING OF LIST TYPE DATA STRUCTURES
07.00 INPUT - UNCLASSIFIED
07.01 INPUT - BINARY
07.02 INPUT - OCTAL
07.03 INPUT - DECIMAL
07.04 INPUT - BCD (HOLLERITH)
07.05 INPUT - HEXADECIMAL
07.06 INPUT - COMPOSITE
08.00 OUTPUT - UNCLASSIFIED
08.01 OUTPUT - BINARY
08.02 OUTPUT - OCTAL
08.03 OUTPUT - DECIMAL
08.04 OUTPUT - BCD (HOLLERITH)
08.05 OUTPUT - HEXADECIMAL
08.06 OUTPUT - PLOTTING
08.07 OUTPUT - DISPLAY
08.08 OUTPUT - COMPOSITE
10.00 SYSTEMS ANALYSIS - UNCLASSIFIED
10.01 SYSTEMS ANALYSIS - NETWORK DESIGN
10.02 SYSTEMS ANALYSIS - FILE AND CORE REQUIREMENT
10.03 SYSTEMS ANALYSIS - SYSTEMS DESIGN
10.04 SYSTEMS ANALYSIS - CONFIGURATION
11.00 SIMULATION OF COMPUTERS AND COMPONENTS - UNCLASSIFIED
11.01 SIMULATION OF COMPUTERS AND COMPONENTS - COMPUTERS
11.02 SIMULATION OF COMPUTERS AND COMPONENTS - PERIPHERAL EQUIPMENT
11.03 SIMULATION OF COMPUTERS AND COMPONENTS - SYSTEM COMPONENT OR FEATURE
11.04 SIMULATION OF COMPUTERS AND COMPONENTS - PSEUDO-COMPUTER
12.00 CONVERSION OF PROGRAMS AND DATA - UNCLASSIFIED
12.01 CONVERSION OF PROGRAMS AND DATA - DATA CONVERSION
12.02 CONVERSION OF PROGRAMS AND DATA - COMPUTER LANGUAGE TRANSLATORS
30.00 DEMONSTRATIONS - UNCLASSIFIED
30.01 DEMONSTRATIONS - DISPLAY
30.02 DEMONSTRATIONS - PARTICIPATION
CATEGORY B - STATISTICS AND OPERATIONS RESEARCH
13.00 STATISTICAL - UNCLASSIFIED
13.01 STATISTICAL - DESCRIPTIVE
13.02 STATISTICAL - UNIVARIATE AND MULTIVARIATE PARAMETRIC
13.03 STATISTICAL - NON-PARAMETRIC
13.04 STATISTICAL - TIME SERIES AND AUTO CORRELATION
13.05 STATISTICAL - PROBABILITY DISTRIBUTION & RANDOM NUMBER GENERATORS
13.06 STATISTICAL - CORRELATION AND REGRESSION ANALYSIS
13.07 STATISTICAL - ANALYSIS OF VARIANCE AND COVARIANCE
13.08 STATISTICAL - SEQUENTIAL ANALYSIS
13.09 STATISTICAL - DISCRIMINANT ANALYSIS
15.00 MANAGEMENT SCIENCE AND O.R. - UNCLASSIFIED
15.01 MANAGEMENT SCIENCE AND O.R. - SIMULATIONS
15.02 MANAGEMENT SCIENCE AND O.R. - LINEAR PROGRAMMING
15.03 MANAGEMENT SCIENCE AND O.R. - NON-LINEAR PROGRAMMING
15.04 MANAGEMENT SCIENCE AND O.R. - SCHEDULING, CRITICAL PATH, PERT
15.05 MANAGEMENT SCIENCE AND O.R. - GAMES, GAME-LIKE MODELS, GAME THEORY
15.06 MANAGEMENT SCIENCE AND O.R. - GENERAL PROBLEM SOLVERS
15.07 MANAGEMENT SCIENCE AND O.R. - INVENTORY CONTROL
CATEGORY C - ACADEMIC APPLICATIONS
16.00 ENGINEERING - UNCLASSIFIED
16.01 ENGINEERING - AERONAUTICAL
16.02 ENGINEERING - CIVIL
16.03 ENGINEERING - CHEMICAL
16.04 ENGINEERING - ELECTRICAL
16.05 ENGINEERING - MECHANICAL AND HYDRAULIC
16.06 ENGINEERING - PETROLEUM
16.07 ENGINEERING - NUCLEAR
16.08 ENGINEERING - GENERAL
17.00 SCIENCES - UNCLASSIFIED
17.01 SCIENCES - GENERAL
17.02 SCIENCES - PHYSICS
17.03 SCIENCES - CHEMISTRY
17.04 SCIENCES - GEOLOGY, OCEANOGRAPHY, GEOPHYSICS AND GEOGRAPHY
17.05 SCIENCES - BIOLOGY
17.06 SCIENCES - SOCIAL AND BEHAVIOURAL
17.07 SCIENCES - ASTRONOMY AND CELESTIAL NAVIGATION
17.08 SCIENCES - MATH, APPLIED MATH AND COMBINATORIAL ARITHMETIC
18.00 NUCLEAR CODES - UNCLASSIFIED
60.00 EDUCATION - UNCLASSIFIED
61.00 HUMANITIES - UNCLASSIFIED
CATEGORY D - BUSINESS APPLICATIONS
19.00 FINANCIAL - UNCLASSIFIED
19.01 FINANCIAL - INVESTING AND BORROWING
19.02 FINANCIAL - CAPITAL STOCK
19.03 FINANCIAL - TAXES
19.04 FINANCIAL - CASH CUSTODY AND FORECASTING
19.05 FINANCIAL - GENERAL ACCOUNTING
19.06 FINANCIAL - AUDITING
19.07 FINANCIAL - BANKING OPERATIONS
20.00 COST ACCOUNTING - UNCLASSIFIED
20.01 COST ACCOUNTING - MATERIAL ONLY
20.02 COST ACCOUNTING - LABOUR ONLY
20.03 COST ACCOUNTING - WORK IN PROGRESS
21.00 PAYROLL AND BENEFITS - UNCLASSIFIED
21.01 PAYROLL AND BENEFITS - PAYROLL
21.02 PAYROLL AND BENEFITS - EMPLOYEE BENEFITS
21.03 PAYROLL AND BENEFITS - PROFIT SHARING
21.04 PAYROLL AND BENEFITS - RETIREMENT
21.05 PAYROLL AND BENEFITS - INSURANCE
21.06 PAYROLL AND BENEFITS - CREDIT UNION
22.00 PERSONNEL - UNCLASSIFIED
22.01 PERSONNEL - RECRUITING AND HIRING
22.02 PERSONNEL - INVENTORYING EMPLOYEES
22.03 PERSONNEL - TRAINING
22.04 PERSONNEL - PERFORMANCE REVIEW
22.05 PERSONNEL - ADMINISTERING WAGES AND SALARY
23.00 MANUFACTURING - UNCLASSIFIED
23.01 MANUFACTURING - SCHEDULING/LOADING
23.02 MANUFACTURING - JOB REPORTING
23.03 MANUFACTURING - BILL OF MATERIALS PROCESSORS
23.04 MANUFACTURING - NUMERICAL CONTROL
23.05 MANUFACTURING - CONTROL SYSTEMS
24.00 QUALITY ASSURANCE, RELIABILITY - UNCLASSIFIED
24.01 QUALITY ASSURANCE, RELIABILITY - TESTING
24.02 QUALITY ASSURANCE, RELIABILITY - PERFORMANCE ANALYSIS
25.00 INVENTORY - UNCLASSIFIED
25.01 INVENTORY - STOCKING AND ISSUING
25.02 INVENTORY - INVENTORY
25.03 INVENTORY - EQUIPMENT AND TOOL INVENTORY AND MAINTENANCE
26.00 PURCHASING - UNCLASSIFIED
26.01 PURCHASING - PREPARING PURCHASE ORDERS
26.02 PURCHASING - MATCHING INVOICES
26.03 PURCHASING - ACCOUNTS PAYABLE
26.04 PURCHASING - PURCHASE ANALYSIS
27.00 MARKETING - UNCLASSIFIED
27.01 MARKETING - SALES AND BILLING FORECASTING
27.02 MARKETING - PROMOTION AND ADVERTISING
27.03 MARKETING - DISTRIBUTOR OR TERRITORY ANALYSIS
28.00 SALES ENTERED AND BILLED - UNCLASSIFIED
28.01 SALES ENTERED AND BILLED - ORDER ENTRY AND SCHEDULING
28.02 SALES ENTERED AND BILLED - INVOICING
28.03 SALES ENTERED AND BILLED - ACCOUNTS RECEIVABLE
28.04 SALES ENTERED AND BILLED - SALES AND BILLING ANALYSIS
28.05 SALES ENTERED AND BILLED - BACKLOG REPORTING
29.00 GENERAL BUSINESS SERVICES - UNCLASSIFIED
29.01 GENERAL BUSINESS SERVICES - RECORDS RETENTION
29.02 GENERAL BUSINESS SERVICES - FORMS MANAGEMENT
29.03 GENERAL BUSINESS SERVICES - TRANSPORTATION
29.04 GENERAL BUSINESS SERVICES - PRINTING AND REPRODUCTION
50.00 INSURANCE - UNCLASSIFIED
50.01 INSURANCE - LIFE
50.02 INSURANCE - FIRE AND CASUALTY
50.03 INSURANCE - PENSION AND WELFARE
CATEGORY E - NUMERICAL METHODS
40.00 ARITHMETIC ROUTINES - UNCLASSIFIED
40.01 ARITHMETIC ROUTINES - REAL NUMBERS
40.02 ARITHMETIC ROUTINES - COMPLEX NUMBERS
40.03 ARITHMETIC ROUTINES - DECIMAL
40.04 ARITHMETIC ROUTINES - FLOATING POINT
40.05 ARITHMETIC ROUTINES - INTEGER
41.00 ELEMENTARY FUNCTIONS - UNCLASSIFIED
41.01 ELEMENTARY FUNCTIONS - TRIGONOMETRIC
41.02 ELEMENTARY FUNCTIONS - HYPERBOLIC
41.03 ELEMENTARY FUNCTIONS - EXPONENTIAL AND LOGARITHMIC
41.04 ELEMENTARY FUNCTIONS - ROOTS AND POWERS
41.05 ELEMENTARY FUNCTIONS - GEOMETRIC
41.06 ELEMENTARY FUNCTIONS - LOGICAL AND ROUNDED
42.00 POLYNOMIALS AND SPECIAL FUNCTIONS - UNCLASSIFIED
42.01 POLYNOMIALS AND SPECIAL FUNCTIONS - EVALUATION OF POLYNOMIALS
42.02 POLYNOMIALS AND SPECIAL FUNCTIONS - ROOTS OF POLYNOMIALS
42.03 POLYNOMIALS AND SPECIAL FUNCTIONS - EVALUATION OF SPECIAL FUNCTIONS
42.04 POLYNOMIALS AND SPECIAL FUNCTIONS - SIMULT. NON-LINEAR ALGEBRAIC EQNS.
42.05 POLYNOMIALS AND SPECIAL FUNCTIONS - SIMULT. TRANSCENDENTAL EQUATIONS
42.06 POLYNOMIALS AND SPECIAL FUNCTIONS - SUMMATION OF SERIES, CONVERG, ACCELN
43.00 FUNCTIONS AND SOLNS OF DIFF EQNS - UNCLASSIFIED
43.01 FUNCTIONS AND SOLNS OF DIFF EQNS - NUMERICAL INTEGRATION
43.02 FUNCTIONS AND SOLNS OF DIFF EQNS - NUMERIC SOLN OF ORD DIFF EQNS
43.03 FUNCTIONS AND SOLNS OF DIFF EQNS - NUMERIC SOLN OF PARTIAL DIFF EQNS
43.04 FUNCTIONS AND SOLNS OF DIFF EQNS - NUMERICAL DIFFERENTIATION
44.00 INTERPOLATION AND APPROXIMATION - UNCLASSIFIED
44.01 INTERPOLATION AND APPROXIMATION - TABLE LOOK UP AND INTERPOLATION
44.02 INTERPOLATION AND APPROXIMATION - CURVE FITTING
44.03 INTERPOLATION AND APPROXIMATION - SMOOTHING
44.04 INTERPOLATION AND APPROXIMATION - OPTIMIZATION
45.00 MATRICES, VECTORS, SIMULT LINEAR EQNS - UNCLASSIFIED
45.01 MATRICES, VECTORS, SIMULT LINEAR EQNS - MATRIX OPERATIONS (SEE ALSO E4506)
45.02 MATRICES, VECTORS, SIMULT LINEAR EQNS - EIGENVALUES, EIGENVECTORS
45.03 MATRICES, VECTORS, SIMULT LINEAR EQNS - DETERMINANTS
45.04 MATRICES, VECTORS, SIMULT LINEAR EQNS - SIMULTANEOUS LINEAR EQUATIONS
45.05 MATRICES, VECTORS, SIMULT LINEAR EQNS - VECTOR ANALYSIS
45.06 MATRICES, VECTORS, SIMULT LINEAR EQNS - MATRIX INVERSION
REVIEW: "COMPUTER SYSTEM ORGANIZATION:
THE B5700/B6700 SERIES"
by Elliot I. Organick.
In recent years a change in computer design has occurred. Instead of
a separate hardware and software design, special attention is given
to a logical, and thereby hardware- and software-integrated, development.
In this development a number of ideas and concepts emerged. In the
B5700/B6700 Burroughs has implemented a number of these concepts,
resulting in a system more advanced and structured than the conventional
von Neumann computers. Because of this structured design, algorithmic
languages like PASCAL and ALGOL 60 are preferable on these computers.
Programs in these languages are no longer penalized with more processing
time and memory requirements only because they are structured.
In this description of the B5700/B6700 the author explains this
structured design well. With the thorough explanation of some basic concepts
a base is laid for a more detailed and complete description of the
B5700/B6700 computers. For these concepts no distinction is made between
hardware and software, whereas the implementation can be a hardware or a
software realization.
The author explains well the difference between the static process
description and the dynamic status of a process, determined by the values of
variables. In fact two parts of memory have to be available, and different
operations must be possible on these two memory parts. It is a matter of
implementation that these two kinds of memory use have been realized in
the same core memory.
On the one hand the block structure in algorithmic languages implies a
nested structure in the process description. On the other hand it implies
the creation and removal of subprocesses in the status of a process. The
stack mechanism is especially suitable for the dynamic increase and
decrease of the status. Then it is easy to determine and to control the
addressing environment of each subprocess.
Until today the concept of parallel execution of processes was only used
by system programmers (for the design of multiprogramming systems).
Burroughs has made this concept available to the user, because the
implementation of this concept is not difficult if the stack mechanism
is available. However, the synchronization primitives for the control
of parallel processes are not easy to handle, and therefore the author
has to spend a lot of energy to explain them.
In general the author has found the right way to explain the concepts
summarized above and their implementation (sometimes one picture tells
more than a thousand words). With these concepts systems like the
B5700/B6700 can be constructed.
It is a pity that the author does not treat the design of the operating
system. It must be possible to explain concepts like multiprogramming,
virtual store and I/O handling in terms of the concepts introduced.
For everyone who wants, out of interest or possibly in connection with
daily activities, to get a good idea of systems like the B5700/B6700,
this book is of high value.
J. Schoenmakers
Eindhoven University of Technology
Computing Centre
Mailing list of JUB6700
corrections and additions.
Institutes, additions
AFMPC (DPMBDE)
Randolph AFB, c/o Frank Sutter
Texas 78148, USA
JUB 2-41
ARBED Service Methodes et Informatique c/o Norbert Rischette
Boîte postale 1802
Luxembourg (Gr.-D)
Department of Central Services Information Systems Division c/o R.C. Fender
City Hall
Jacksonville, Florida 32203, USA
Dime Savings Bank of New York c/o Frank P. De Angelo
9, De Kalb Avenue
Brooklyn, N.Y. 11201, USA
Gifford-Hill & Company, Inc. c/o R.M. Stephens
P.O. Box 47127
Dallas, Texas 75247, USA
HRS Data Center
P.O. Box 2016
c/o Franklin Wertman Jr.
Jacksonville, Florida 32203, USA
Infonavit c/o Arturo Cota Castro
Macedonio Alcala No 10
Mexico 20, D.F., Mexico
Information Storage & Retrieval
6204 Keybro St.
Laurel, MD. 20810, USA
c/o Dennis E. Husman
International Bank for reconstruction and development
1818 H Street, N.W.
Washington, D.C. 20433, USA
c/o Karl G. Jahr
International Monetary Fund
19th & H Streets, N.W.
Washington, D.C. 20431, USA
c/o Frank A. Maranto
Lomas & Nettleton Company
201 Main Street
Houston, Texas, USA
c/o James A. May
National State Bank, Computer Center
401 Park Avenue
Linden, New Jersey, USA
University of Delaware, Ecology Center
Utah State University
Logan, Utah, USA 84321
Utah State University, Computer Center
Logan, Utah 84321, USA
VOLVO Car Division c/o Goran Bjorkman
Technical Data Processing 56840
Göteborg, Sweden
Institutes, corrections
Police National Computer Unit,
page JUB 1-27:
c/o E. Quinney = c/o K.M. Shew.~y
Universidade do Rio Grande do Sul,
page JUB 1-28:
c/o •••• = c/o John D. Rogers
University of Illinois at Urbana
c/o Richard Boyer
c/o Robert K. Shaffer
c/o Wendell L. Pope
Champaign Center for Advanced Computation
page JUB 1-29:
c/o J. Meir
University of Illinois at Urbana
Champaign, Civil Engineering Systems Laboratory, c/o L.T. Boyer
JUB 2-42
Mailing list of Burroughs-representatives, additions
* Mr. Peter Ax, Institut für Informatik,
Universität Karlsruhe, Am Zirkel 2,
75 Karlsruhe, Deutschland.
* Mr. James Bouhana, Systems Specialist,
Burroughs Corporation, 1461 Camino del Rio South, San Diego,
California 92108, U.S.A.
Burroughs GmbH, System Support,
Large Systems, 6 Frankfurter Allee 14-20,
6236 Eschborn/Ts., Deutschland.
* Mr. Faruk Sarc, EDP Promotion Manager,
Koc-Burroughs, Sicilli Ticaret A.S.,
Istanbul, Turkey.
* Mr. Walter Lawson, Manager Market Development - Education,
Burroughs Machines Ltd., Heathrow House, Bath Road,
Cranford Hounslow, Middlesex, England.
* Mr. Alan L. Reade, Sales manager large systems,
Burroughs Corporation, Burroughs Place, P.O. Box 418,
Detroit, Michigan 48232, U.S.A.
* Mr. Rolf Recktenwald, Battelle,
Institut Rechenzentrum, Am Römerhof 35,
6 Frankfurt/Main, Deutschland.
* Mr. Donald C. Swanson, Manager,
Burroughs Peripheral Operating Systems Section,
14724 E. Proctor Avenue, City of Industry,
California 91749, U.S.A.
* Mr. G.T. Tucker, Manager, L.S.S.G.,
JUB 2-43
Burroughs Machines Ltd., Dominant House, 85 Queen Victoria Street,
London, England.
* will receive 1 copy.