
PROFESSIONAL COMPUTING
THE MAGAZINE OF THE AUSTRALIAN COMPUTER SOCIETY, SEPTEMBER 1991

[Cover: advertisement for Open Desktop from SCO, The Santa Cruz Operation. "SCO's single package integrates all this! Maximise your productivity with Open Desktop's complete set of industry standard system services. Open Desktop: The Complete Graphical Operating System."]


THE PRACTISING COMPUTER PROFESSIONAL — PCP — SCHEME

IT'S AN OPEN AND SHUT CASE. CONTACT YOUR ACS OFFICE IF YOU HAVE NOT RECEIVED AN INVITATION TO PARTICIPATE.


Coexistence concerns more than living with your mainframe.

LAST month we had no 'Opening Moves' section because of the number of good LAN articles and the knowledge that this month's topic of coexistence and portability would give open systems good coverage.

Much of the material in this issue discusses a world already here, and the OSF white paper comments on some interesting aspects of the future.

When Ashley Goldsworthy, your society's CEO, sent the letter printed below, it triggered the thought that networking has a much more general meaning.

Dear Professor Goldsworthy

United Nations Scientific and Educational Organisation (UNESCO), Intergovernmental Informatics Program (IIP)

I write as the IIP RINSEAP point of contact for Australia, and Member of the Australian Natcom to UNESCO Science Network IIP representative.

I need to find out whether any members of the ACS would be willing and able to contribute to UNESCO programs.

Activities of the IIP Regional Informatics Network for South East Asia/Pacific (RINSEAP) already in train are:

1. $US22,000 study of needs of UNESCO member nations in the region to connect to an electronic network. J. Hine (NZ)/A. Montgomery (RMIT).

2. Meeting, Delhi, India, November 15-19, 1991, to discuss technical aspects of electronic networking. (Funded by ROSTSEA.)

3. Workshop on Software Engineering Practice: The Asia Pacific Perspective (in conjunction with ICSE'14), May 11-15, 1992, Melbourne. (Funds being sought from UNESCO/AIDAB.)

4. Proposal for a database on courses, information resources and experts on informatics in the SEA/P region. (Funds being sought.)

I am holding much additional material which could assist readers in considering whether and how to contribute to UNESCO activities.

Could you please advise me by hardcopy (fax, mail) of the resources (for example persons, projects, etc) which you or your colleagues can contribute, or would wish to initiate or contribute to, in furtherance of the IIP objectives.

Tony Montgomery
IIP Representative, Australian Science Network, UNESCO

Tony can be contacted at the Royal Melbourne Institute of Technology; GPO Box 24763, fax (03) 662 1617, email [email protected].

That sounds like useful networking to me.

Tony Blackmore

PROFESSIONAL COMPUTING
CONTENTS: SEPTEMBER 1991

A LOOK AT COMPUTING IN THE 1990s: An Open Software Foundation White Paper. 2

AN INTERVIEW: with Tim Segal of Open Software Associates, winners of the Oztech ’91 excellence prize of $10,000. 8

UNIX OLTP: Not just an IT myth: A discussion of the facilities coming available for distributed transaction processing. 27

UNIX INTERNATIONAL: Distributed Computing defined in ATLAS: The distributed computing framework, one of the three key elements defined in UNIX International’s open systems architectural framework, UI-ATLAS, defines compatibility with installed systems while defining interfaces for innovative technologies. 28

DISTRIBUTED APPLICATIONS — AVAILABLE NOW ON ALL MAJOR PLATFORMS: Sun's vision has been to provide technology that would allow file sharing among every computer on a network, regardless of make. Today, through NFS, it is possible for a user to sit at a PC, log onto drive E, which may be a Sun SPARCstation, move a file to drive F, which could be a VAX, or access a file stored on drive G, which happens to be a mainframe running IBM's MVS/NFS. 31

PCPssst!: The PCP page. 36

REGULAR COLUMNS: ACS in View. 34


COVER: MPA International is a national distributor for Open Desktop and other UNIX products from the Santa Cruz Operation.

Open Desktop is a complete graphical workstation environment for standard 386 and 486 computers. It combines into one complete environment five basic open system services: the UNIX System V Operating System, the OSF Motif Graphical User Interface (GUI), TCP/IP and NFS networking facilities, an SQL relational database and concurrent MS-DOS/Windows 3.0 operability.

The combination and complete integration of these facilities allows Open Desktop users and developers to take advantage of powerful network-distributed graphical applications.


FROM NOW ON

An Open Software Foundation White Paper

A look at computing in the 1990s

IN THE 1990s, computing will be based on collective entities composed of specialised nodes. From the user's point of view, those entities will be open; specialised nodes from many vendors will interoperate. Vendors will incorporate proprietary value-added to an open systems set of trusted building blocks. In this manner, proprietary products will become custom elements within a larger, open systems environment.

The dominant influence on computing in the 1990s will be the evolution of the client/server paradigm. The X Window System(TM) technology represents a bellwether in this evolution — an evolution towards systems that will be characterised by specialised nodes, interoperating with other specialised nodes to deliver services to users.

Today the mainstay of computing is the general-purpose node, as represented by the mainframe or the personal computer. In the 1990s, such general-purpose nodes will still exist, just as single-celled creatures do. For sophisticated functionality, however, collective computing will be the mainstay; the collective will be assembled from specialised parts. Mainframes can be expected to evolve into servers while general-purpose PCs will be replaced by special-purpose media nodes that provide a window from the desktop to a worldwide information environment.

The term collective computing is used here in place of more familiar terms such as distributed computing, client/server computing, and networked computing. Collective computing more precisely emphasises several important aspects of computing in the 1990s.

For example, the network will likely include many collective computing environments. The idea of one worldwide computing entity is neither desirable nor feasible, although there certainly will be one worldwide network connecting numerous collective entities. The term client/server represents a two-party relationship; however, the collective is made up of many cells, each performing its own specialised function.

Why Collective Computing?

THE transition to collective computing will occur for the same reasons that single-celled creatures evolved to become multicelled. The capability of the more complex system is greater. Specialised nodes acting together can accomplish more with greater reliability than can one large, general-purpose entity or a colony built from identical, general-purpose cells.

There is no theoretical restriction against a large, intelligent, one-celled or colony creature. However, the ability of the specialised parts to evolve to achieve their desired functionality more successfully, while maintaining stable interfaces with the rest of the system, has proved to be the more successful design philosophy for living creatures. The benefits of such a design approach ought to apply equally well to computer systems.

The computer industry knows how to design specialised nodes that solve particular classes of problems extraordinarily well. They prove to be both expensive and inefficient for other purposes. A VLSI simulator is a very important and valuable specialised node, for instance, but it is not well suited for general transaction processing. Nodes designed to address the needs of transaction processing have comparable limitations. Similar observations also can be made about real-time nodes, specialised I/O nodes (such as those for video), and communication nodes.

[Figure 1. A Mach 3 microkernel system: application code and a transparent shared library sit above generic UNIX servers (file servers, a pipe server, a UNIX process manager) and specific servers, which communicate through the microkernel interface with a microkernel providing device drivers, task management, memory management and IPC.]

With the evolution of X Window technology comes the production of specialised nodes, notably X terminals. Those X terminals will evolve to become media nodes with a special function: displaying information to the user in all of the forms appropriate to human needs.

This type of specialisation is to be expected. Just as human sensory organs are built from specialised cells, media nodes will address sound (reproduction, speech generation), vision, tactile input, and other functions. These nodes must be fairly powerful to interact with people in one or another of these modalities. The complete service delivered will be possible only as a result of the performance of the collective system.

Portability, Scalability and Interoperability in Collective Systems

IF collective systems become the foundation for open systems in the 1990s, the principles that govern open systems today — portability, scalability and interoperability — will still apply to collective systems; however, their meaning will evolve.

The performance of collective systems will depend on secure, high-performance interoperability to tie together the specialised nodes. Such a level of interoperability will be greater than the general network connections among general-purpose nodes that characterise today's distributed environments. The elements of a collective will require a secure connection if the performance of the overall environment is to be trusted. Performance, moreover, will be critical because few problems will be solved by computations that remain within any single node.

Portability, as it refers to the ability to move code from one collective system to another, will remain an important property of these open systems. From a user's point of view, portability for a collective system is unchanged. This does not mean, however, that software will be moveable from any node within a collective to any other. The specialised nodes of collective systems will not be identical. They will perform separate functions. As a result, not every program will run on any node (in a practical manner). Print servers will not run the same programs as VLSI simulators, and vice versa.

As with biological systems, not all specialisation of functionality makes sense. The science of biology has a common taxonomy; a similar framework will be needed for collective computing. For example, nodes might be specialised into four families: computing, information management, input/output, and communications. Each family would have its own subcategories, serving the purpose they do in modern biology.

Scalability is another property often associated with open systems. It will continue to be important to develop scalable families of specialised nodes — for example, a scalable family of relational database nodes. However, the notion of general scalability — system software accommodating all ranges of performance across all specialised purposes — is an impractical goal. The properties of scalability


for compute servers measured by FLOPS (floating-point operations per second) will bear little resemblance to the properties of scalability for database servers measured by transactions per second.

Scalability will continue to be derived from the base hardware technology. In biology, all multicell creatures share many common elements of cell structure. If the performance of those elements improves, a systemic improvement can be expected. The same expectation holds for computing systems. Improvements in hardware speed, cost, and reliability will continue to be driving forces behind the practicality of their evolution. In particular, the improvement of hardware technology coupled with the ability to manufacture low-volume custom nodes as cheaply as high-volume general-purpose ones will be one of the factors that makes the transition from unitary to collective computing practical.

To summarise, nodes supplied by different vendors will interoperate. Applications will be portable across collectives but not necessarily between specialised nodes within a collective. Individual nodes of a given type will be scalable.

The Structure of Specialised Nodes: The Microkernel Architecture

IF collective computing comes about, the computing industry will seek an engineering discipline that allows specialised elements to be created efficiently. As with architecture for other disciplines, collective computing will require a framework to assemble these elements from a set of reusable parts. If the architecture is a good one, construction of a node using only the necessary parts should be possible. Computer engineers should be able to produce minimal designs for each node. This would simplify the resulting system and improve maintainability, trust, and economy.

Along these lines, the Open Software Foundation Research Institute has been tracking development of a new generation of microkernel operating systems in the research community. These operating systems permit specialised nodes to be constructed from a minimal set of parts. The Research Institute has been working closely with leading designers of such systems, including the Mach group at Carnegie Mellon University and the Amoeba group at the Centre for Mathematics and Computer Science and the Vrije University in the Netherlands. Figure 1 shows the elements of the Mach 3 microkernel system.

The minimal microkernel is concerned with interprocess communication (IPC), memory management, and task management. Most of the functions of the operating system arise from user-state processes structured in a client/server relationship within a particular node. There are general processes of general utility (Table 1) and specialised processes used to provide particular behaviour in a node. The specialised serving processes are used to deliver the functionality of the Unix operating system.

The Role of Proprietary Systems

IN the 1990s, few standalone, non-interoperating proprietary system nodes will exist. Customers will demand interoperability. For that reason, most proprietary products will be built to serve some specialised purpose in an open environment. To decrease their software development costs, vendors likely will build such specialised nodes from source code licensed from an industrywide supplier. They will add proprietary code to deliver the specialised benefits of their product only as necessary.

TABLE 1. Examples of generic serving processes in a microkernel system

Server            Purpose
Authentication    Login and process credentials, e.g. Kerberos
File Server       File system, e.g. BSD, AFS
Name Server       Root of name space, e.g. X.500, DCE
Network Server    Network protocols
Device Server     Device allocation
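To make the client/server structure described above more concrete, here is a toy sketch in C. It is not Mach or OSF code: the message format, the name_server routine and the use of a Unix-domain socket pair in place of real microkernel IPC ports are illustrative assumptions only.

/*
 * A toy sketch of a user-state serving process and its client.  The socket
 * pair stands in for a pair of IPC ports; the "name server" is an ordinary
 * process that answers request messages.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>

struct msg {                 /* a minimal request/reply message */
    char op[16];             /* "resolve", "reply" or "quit" */
    char data[64];
};

static void name_server(int port)        /* a toy "name server" process */
{
    struct msg m;
    while (read(port, &m, sizeof m) == (ssize_t)sizeof m) {
        if (strcmp(m.op, "quit") == 0)
            break;
        if (strcmp(m.op, "resolve") == 0) {
            char reply[sizeof m.data];
            snprintf(reply, sizeof reply, "handle:%s", m.data);
            strcpy(m.op, "reply");
            strcpy(m.data, reply);
            write(port, &m, sizeof m);   /* send the reply message back */
        }
    }
}

int main(void)
{
    int port[2];
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, port) < 0)
        return 1;

    if (fork() == 0) {                   /* child: the serving process */
        close(port[0]);
        name_server(port[1]);
        _exit(0);
    }

    close(port[1]);                      /* parent: a client of the service */
    struct msg m;
    strcpy(m.op, "resolve");
    strcpy(m.data, "/services/printer");
    write(port[0], &m, sizeof m);        /* request message */
    read(port[0], &m, sizeof m);         /* block for the reply */
    printf("client received %s\n", m.data);

    strcpy(m.op, "quit");                /* ask the server to shut down */
    write(port[0], &m, sizeof m);
    wait(NULL);
    return 0;
}

The point is only the shape: the operating-system service is an ordinary user-state process reached by exchanging messages, rather than by trapping into one large kernel.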

In this model, open and proprietary are not two conflicting categories of software. Open refers to the interoperating character of the environments. Open environments are those in which the customer has a choice of vendors to supply the separate nodes. Proprietary refers to the vendor-specific value-added that enables a product to serve a particular specialised function in a network.

From this perspective, the UNIX operating system becomes a particular kind of specialised proprietary system and not the underpinning for every node in the distributed environment. This situation is desirable because the UNIX operating system is too large and too complex, carrying with it too many kinds of functionality. Most nodes will not require all of this functionality to interoperate successfully, and indeed, in some cases, will benefit from its absence. For example, database and compute servers will best access hardware resources directly, with little interference from the normal UNIX protection and structures.

Architecturally, open systems technology suppliers such as OSF will provide a base on which these interoperating, specialised nodes (including the UNIX operating system) can be constructed efficiently by vendors. Enabling this efficient construction is the special appeal for open systems of a microkernel architecture.

Trust in a Collective Environment

THE division of a computing element into a set of reusable parts also introduces a need for another goal for the system: trust.

Why is trust important? The more people depend on computers, the more they will require that computers perform reliably and in a trustworthy fashion. Because distributed systems are particularly susceptible to diseases that use the system’s own means of transport to infect the entire organism, the issue of trust takes centre stage. The walls of a computing room no longer provide sufficient security. For that reason, networked systems must have the trust built in; there can be no assumption that any communication channel or even most nodes are physically secure. The most important and common use of additional cycles during the 1990s will be in the service of increased trust, reliability, and availability for collective systems.

Collective systems are subject to infection; however, they also have certain natural resistive mechanisms that can be employed. Their collective character provides some advantages germane to achieving high reliability, trust, and availability. For example, such systems are easily amenable to replication of data, computing resources, communications links, and I/O devices.


Because they are physically distributed, no single physical catastrophe can destroy them.

Although providing trust for collective systems is a challenging goal, it is not unachievable. Trust must be a part of the fundamental design; otherwise, collective systems are only as strong as their weakest link. Hence, one of the most important tasks for open systems suppliers will be to provide parts that can be assembled into such trusted systems.

The OSF Research Institute has been exploring such a modular approach to trust for collective computing. Working with Trusted Information Systems (TIS) and Carnegie Mellon University under the sponsorship of DARPA (Defense Advanced Research Projects Agency), OSF is investigating the construction of a microkernel-based kit of serving processes that can support B3-level security. B3 security delivers a much higher degree of assurance that the target system delivers the claimed trust benefits.

The microkernel constitutes the common software element on each specialised node of the collective that guarantees a high level of trust. It is sufficiently small to be thoroughly analysed. The microkernel ensures that interprocess communication is properly authenticated by the use of tokens called capabilities. A distributed environment can have any node on the network; a collective is a distributed environment in which all the nodes share a common microkernel.

Trusted Information Systems has completed a prototype of a B3 architecture for an individual node. The Open Software Foundation, Carnegie Mellon University, and Trusted Information Systems are working with the NCSC (National Computer Security Centre) to put this design into formal review. As the number of incidents of computer Trojan horses and viruses increases, the marketplace will demand such systems as soon as they are ready.

Performance of Microkernel Systems

THE modular design of a microkernel system potentially is subject to a performance penalty. Message passing between serving processes and the microkernel requires context switches, slowing down the system.

The benefits of microkernel systems more than compensate for the performance cost. The benefits are the ability to design specialised elements in a collective environment readily, providing a high level of trust, minimal excess code, and simplified maintainability. In the 1990s, these benefits will be critical, more important than extracting every last cycle of performance.

The situation is analogous to one that exists today for programming languages. Programming is no longer done in assembly language, although doing so is faster than using high-level languages. This is true for two reasons: compilers work well, and the benefits of high-level languages — ease of design, maintenance, and portability — are considerable. Similarly, the benefits of the microkernel more than outweigh the small performance penalty the microkernel may exact.

The resulting performance requires no apologies. A large integrated system has its own bottlenecks. The code interrelationships can be complex; the code paths, difficult to modify.

Here are some techniques being developed at Carnegie Mellon University to obtain good performance from the Mach 3 microkernel:

• Make optimal use of memory management. The largest CPU cost of many operations in a traditional kernel is copying data (in and out of buffers, for instance). Mach memory management's copy-on-write provides the potential to eliminate much data movement between the kernel, block I/O devices, clients, and servers. Because copy-on-write is integrated with Mach IPC, this can be done without violating the conceptual integrity of a message-based system.

• Perform server work in client address space. Many kernel activities are effectively being moved out of the kernel directly into the client’s address space by being placed in the transparent shared library. To the extent possible, the requirements of a server are implemented in the library, thus avoiding the need for either a message or kernel trap.

• Use fast local IPC. The Mach IPC can be and has been optimised for the local case because remote IPC is handled outside the microkernel. For example, a client can "hand off" to a server and a server can "hand back" at the completion of a local IPC. The context switch is unavoidable, but the path through the scheduler is optimised or avoided.

• Take advantage of fine-grain parallelism for multiprocessing. The microkernel already contains the fine-grain locking needed for effective multiprocessing support and allows servers to be multithreaded.

• Use augmented servers. Developers are working on a server framework that allows several message-based servers to be combined if necessary while retaining the clear modularity and message-passing paradigm.

Fundamentally, the microkernel architecture replaces most system traps with messages (involving a context switch). To maintain performance, the options are to avoid the message by working in the client (or recombining servers), to make message passing fast, and to improve the inherent speed of kernel services (by avoiding data movement).


Preliminary measurements of the Chorus, Mach 3, and Amoeba systems suggest that systems architected with microkernels will perform without significant penalty. Table 2 provides some preliminary measurements of the Mach 3.0 single-server system compared with the performance of the Mach 2.5 integrated kernel system. Mach 2.5 is generally comparable with commercial UNIX system performance.

Table 2. Mach 2.5 versus Mach 3.0/UNIX Server as measured at CMU on a Sun 3/60 with 8 megabytes of RAM and a Priam disk.

Test             Mach 2.5    Mach 3.0    Mach 3.0 Speedup
compilation      28.5s       27.4s       1.0
ITC benchmark    400s        405s        0.98
paging test      73.5s       75s         0.98

The single-server implementation combines all the user space servers into one serving process in user space. Thus, it is only an approximation of the full multiserver architecture shown in Figure 1. Although preliminary, these measurements provide some basis for optimism.


Conclusion

COMPUTING is going through a transformation that, in many ways, is as important as the evolution of single-celled creatures to multicelled creatures. The transition will take time; however, the form it will take is evidenced in X Window System technology today. It is a harbinger of computing in the 1990s, which will consist of collective entities composed of specialised nodes. They typically will be built from an open systems set of trusted building blocks with some amount of vendor-specific value-added. In this way, proprietary and open systems will be merged into a cooperating set of industry contributions serving users' needs.


AND THE WINNER IS:

SENATOR John Button opened the envelope and announced that Open Software Associates was the winner of Australia's richest IT prize, the OZTECH'91 Excellence Award, for their OpenUI product.

Your editor was aware of Open Software Associates as the Phoenix which had arisen from the ashes of Hewlett-Packard's Australian Software Operation; knew that they had a product which had been shown at the CeBIT Technology Fair in Germany earlier this year; and now a $10,000 prize. I had to learn more.

Their product, OpenUI, is a User Interface Management System. What I saw at the trade show in Canberra was a very snappy Builder which lets you create the user interface very quickly indeed. Then, without any redevelopment effort, the same interface is run up on other GUIs and on character terminals. It is quite impressive to see this happening across the network.

There is also a presentation language to fine tune your control of the user interface behavior in areas like data validation and interaction. An elegant, message-based programming interface encourages healthy separation between user interface and application code.

And later I visited their facility in Ringwood, Victoria. Salesmen when making calls on clients have the habit of studying the visitors' book in case there is intelligence to be gleaned. It is an old habit which informed me that OSA's product is of regular interest to some very significant overseas vendors; an interest which convinced me that the opportunity to interview Tim Segall was important.

Tim is a director of Open Software Associates Ltd, the staff and management buy-out of the Australian Software Operation, closed late last year by Hewlett-Packard. He completed Honours in Computing Science at the University of Queensland, and was a committee member of the Australian Unix Users' Group. With Hewlett-Packard, he had been a quality and productivity manager, and software development manager.

As well as OSA's OZTECH'91 win, its product OpenUI was also selected in the top 100 Australia-friendly, world-class products for the Making IT in Australia catalogue. Tim headed the engineering effort which brought OpenUI to market on time, keeping a schedule set in the early "exciting" days of the buy-out. Delivering on schedule, certainly under those conditions, must represent another first! It is testimony to the commitment of the associates.

The company has already opened its first overseas subsidiary in Europe, Open Software Associates GmbH. It is now establishing master distributors for North America.

In addition to its product sales, Open Software Associates also offers its considerable experience in development of systems and applications on a contract basis. Its particular expertise is in applications which run on multiple platforms and displays.

Open Software Associates can be contacted at (03) 871 1666.

The Interview

Tim Segall

EDITOR: In Professional Computing, we have had articles on ABIs, OSF's ANDF plans, and the use of XPG3-conforming languages and platforms. I'm interested in your views on the commercial realities of application portability.

TS: We're certainly moving in the direction of increasing application portability, as the computer industry matures. What everyone wants is for the boxes to become commodity items that you can mix and match to meet your real requirements. To be commodities, they have to plug and play. Plug-and-play means you must have standards. We're not entirely there yet, but there are a number of trends, such as the standards you indicate. Increasingly, we're seeing compliance with such standards. For instance, most of the vendors either have now, or will have very shortly, POSIX-compliant operating system interfaces, even on what have traditionally been proprietary operating systems. VMS is a good example. It's a matter of waiting for standards to emerge, and be implemented on a critical mass of systems.

While we are waiting, are there any shortcuts to portability of applications?

Not really. There are at least four different elements of an application that require standardisation for portability. I'm thinking of the database, the networking, the user interface, and what I call the logic proper — the logic of your business rules, or the problem you are trying to solve. This isn't a universal view: some applications need other special services; and many people are currently ignoring the networking component, for instance; but it's a good working model.

What's the basic strategy for achieving portability of this four-part application?

Ideally, identify a well-defined, consistent, broad programming interface (or API) for each of the areas you want to deal with, and program to it. If you can't manage that, the next best thing to do is write yourself such an API, and put the services you need underneath that. The goal here is to give yourself some layer of insulation, some element of future-proofing.
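A minimal sketch of that insulation layer in C. All names here (app_db_*, vendor_exec) are invented for illustration; they stand in for whatever database product sits underneath, not for any real library's interface.

/*
 * The application programs only against a small API it owns; everything
 * vendor-specific sits behind it and can be swapped without touching the
 * application code.
 */
#include <stdio.h>

typedef struct { void *impl; } app_db;           /* opaque handle */

int app_db_open (app_db *db, const char *name);
int app_db_exec (app_db *db, const char *sql);
int app_db_close(app_db *db);

/* ---- one back-end implementation; the vendor call is a stub here ---- */
static int vendor_exec(const char *sql)           /* stand-in for a vendor library call */
{
    printf("[vendor library] executing: %s\n", sql);
    return 0;
}

int app_db_open (app_db *db, const char *name) { (void)db; (void)name; return 0; }
int app_db_exec (app_db *db, const char *sql)  { (void)db; return vendor_exec(sql); }
int app_db_close(app_db *db)                   { (void)db; return 0; }

/* ---- application code: knows nothing about the vendor underneath ---- */
int main(void)
{
    app_db db;
    app_db_open(&db, "orders");
    app_db_exec(&db, "UPDATE orders SET status = 'shipped' WHERE id = 42");
    app_db_close(&db);
    return 0;
}

If the vendor changes, or a standard interface arrives later, only the code behind app_db_* is touched; the application logic above it is unaffected.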

4GLs are commonly promoted as an answer to portability. Few, if any, would satisfy your goal of an API across your four elements.

Well, you may not be exposed to the API in a closed system. 4GLs are a solution for some situations, but they can mean problems of coexistence of the 4GL applications with off-the-shelf packages, in terms of data sharing, etc.

We’re getting to the stage now where the porta­bility problems are nearly solved, independently of any particular environment. So if you choose a standard DBMS interface, then you are left won­dering whether the 4GL offers the best solution for the application logic requirements, now that the portability has been solved. You may find that one of the established languages is more appropriate to the task at hand: if it’s commer­cially oriented you might choose COBOL; if its scientific maybe FORTRAN; process control maybe C. By writing to standard APIs, you could choose the best language for the task. Which might still be a 4GL, but on its own merits, like speed of development or relationship to a CASE tool — not because it resolves a different issue of portability.So APIs are the key for flexibility?

So far. Most of the currently emerging standards, of course, are source-level interfaces, not binary-level interfaces. You mentioned the interesting work emerging, in particular from the Unix community, in binary-level interfaces: the multiple Application Binary Interfaces from Unix International, and the more challenging Architecture Neutral Distribution Format from OSF. I think these are key things that will in future change the way we think about portability.

So where are we now as regards portability of applications?

Historically in the computer industry, portability has been very poor. Today you can achieve it with some effort. Hopefully, in five years' time, it will just come naturally. What's making the difference (and taking the time as well!) is firstly, defining the standards and achieving international agreement, and secondly, implementing them on enough platforms to give a critical mass.

GOSIP and OSI demonstrate this. Almost everybody of note has declared this to be their future direction, but there still aren't enough real implementations on enough machines to make true heterogeneous networks a plug-and-play reality. But that's clearly the future of networking.

I already mentioned POSIX and X/OPEN standards and the impact they are having on operating system APIs. And there’s some very interesting work being done in databases. Late last year, a number of vendors demonstrated talking across a network using a standard API for Remote Database Access. That looks like it will turn into a major standard in that arena, from the number of vendors signed up for it (with a couple of notable exceptions: it’s interesting to see now that some of the independent software vendors are falling into the trap of trying to protect their proprietary systems). There was some very interesting work being done at the University of Queensland in the remote database access area.

You’ve mentioned three of the four elements of your model: what about user interfaces? The so- called “look and feel consistency” of GUIs is a feature of many vendor sales presentations. Is this an answer?

No! Don't turn to GUIs to solve your portability woes! That consistency is certainly important, but it's a benefit from the end-user's viewpoint. It's not so simple for the programmer trying to achieve portability.

There are a few major GUI players, and every few years new technology brings another player into the market. So now we have Windows and Presentation Manager for PCs; Motif and Open Look from the Unix camps; and Apple with their fairly constant 10 per cent share. And there are new ones coming down the track, like NeXT; and new technologies like multi-media which may force a re-thinking of the current user interface models.

The point is that there is no standard in this area, and unlikely to be one. Secondly, the APIs of most of these GUIs are not the sort of thing that most programmers should tackle lightly.

That’s a significant issue. It is the convenience to the users that is promoted. We are asked to take for granted that the developers will find it equally easy.

True, but it isn't easy. That is not to deny the pressure from users to get their favourite GUI feel on all their corporate applications, nor the bottom-line economics. A GUI environment means there is some cross-learning between applications: Apple research has proven that. Now, if an environment choice like this can save a mere $200 in training costs per user per year, or can give a 54 per cent improvement in work accuracy and increased productivity, then that's worth millions to a large organisation.

But for programmers, it's a long learning curve to become expert in a GUI. The APIs here are low level, complex, and prolific — not even well thought-out, in some cases. Programming GUIs is a productivity drain. Really, your enterprise is delivering thunks (or whatever it is you do), and your MIS department is there for custom code in support of thunk-supply. You don't want to sink productivity into GUI expertise any more than you want to get into the bowels of database management.

You can afford to get into GUI programming only if you are very large, and if you must. When you've been through the exercise a couple of times, you realise the complexity in the underlying toolkits, and how long it takes to become professionally productive with a particular toolkit (which is learning the enterprise potentially loses if someone moves on). So you need productivity tools to help you. In any case, writing to these low-level interfaces is likely to give a poor result in terms of maintainability; you are much better off to describe your interface at a higher level.

So what do you suggest? Pick a GUI and declare a company standard?

Well, that's not a very practical choice unless you have very deep pockets indeed. First off, GUIs tend to be associated with particular hardware: Windows with PCs, the Apple Macintosh, X-based systems on workstations. If you are a decent-sized organisation, you're likely to have inherited a mixture of hardware. Even if you haven't, you want to maintain some measure of independence in your future purchases. You are planning for application portability; why would you let a particular GUI tie you back onto a hardware platform again? Of course, with extra-deep pockets, you can re-equip as often as necessary.

But here's another critical aspect. You need to stick to your knitting — the supply of thunks. That means you want to buy as much off-the-shelf software as possible. Now, different GUIs have different application specialisations. For instance, if you do some desktop publishing, Apple has the lead in applications, so you might find a Mac in the marketing department. PCs dominate the low-end tools, so you'll find word-processing and spreadsheets on Windows boxes. If you get into CAD-CAM, many of the best applications are on workstations. You may or may not have that spread now, but you don't want to restrict your choice of off-the-shelf solutions. And you also want to keep your options open as you move into client-server applications, where you may want to move the user interface to a different box entirely.

Not to mention the matter of future-proofing your choice of GUI.

Exactly. Two issues: coping with many GUIs now, and coping with a future GUI you don't know about. Both of those are well addressed by choosing a tool which makes a high-level abstraction of a user interface, and letting the tool orchestrate the GUIs of your choosing — present and future — beneath that level. It can also simplify your maintenance issues.

GUI applications seem much more complex to maintain.

That's absolutely true. The only thing you can do about that is to apply some learning we have from the database world. That is, to move the management of the user interface into a separate component of the application. So, just as your dialogue with the disk files is managed by a DBMS, your dialogue with the user is managed with a User Interface Management System.

Then, in the same way that you wouldn’t dream of coding where on the disk this piece of data must be stored (leaving it to the DBMS), you do exactly the same with the UIMS. You don’t say: “Place this data at a particular location on the screen”, you say: “Here’s the data for the customer-name field”, and let the UIMS handle placement according to your design.

You can then maintain the application code proper, and the user interface, independently of each other. That’s particularly valuable when you realise how many requests for change are about the user interface (not surprising, since most users never understand what happens in the application logic). You can change the user interface without having to change application code.
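As a rough illustration of that division of labour, here is a small, self-contained C sketch. The ui_set_field and ui_show_form calls and the toy form table are invented for this example; they are not OpenUI's interface, only the general shape of a field-oriented dialogue with a UIMS.

/*
 * The application names fields and supplies values; the toy "UIMS" below
 * alone decides how and where they are presented (GUI widget, terminal
 * field, and so on).
 */
#include <stdio.h>
#include <string.h>

struct field { const char *name; char value[64]; };
static struct field form[] = { { "customer-name", "" }, { "credit-limit", "" } };

static void ui_set_field(const char *name, const char *value)
{
    for (size_t i = 0; i < sizeof form / sizeof form[0]; i++)
        if (strcmp(form[i].name, name) == 0)
            snprintf(form[i].value, sizeof form[i].value, "%s", value);
}

static void ui_show_form(void)      /* placement and rendering live here only */
{
    for (size_t i = 0; i < sizeof form / sizeof form[0]; i++)
        printf("%-15s [%s]\n", form[i].name, form[i].value);
}

/* ---- application code: pure logic, no layout, no toolkit calls ---- */
int main(void)
{
    ui_set_field("customer-name", "Acme Pty Ltd");
    ui_set_field("credit-limit", "50000");
    ui_show_form();
    return 0;
}

The application only names the customer-name field and supplies its value; whether that field ends up as a Motif widget, a Windows control or a field on a character terminal is the UIMS's business.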

Given some high-level tools then, do you see that everyone will eventually re-equip for GUIs?

That depends. If you have a large investment in terminals, you can’t afford to throw that away; but you also want your terminal users to have access to new specialist applications. So either you have the unattractive option of making your terminals a special case, or you put the decision in the too-hard basket; or you choose a tool which treats terminals as just another peer among the GUIs. Of course, that means a tool which provides a proper window manager for the terminal.

I think we've reached the area where Open Software Associates' products are relevant.

Yes. OpenUI (open-you-eye) is a UIMS that does address all the issues we've discussed. Unfortunately, there are a number of products which claim to be a UIMS, which fall into various classes. Often they fail to utilise the underlying toolkit properly, or at all — that is, they provide emulations instead. Many of them fall far short of being a true UIMS, and only provide a slightly higher level toolkit. And most of them ignore the terminal as a problem.

So your product OpenUI drives terminals as well as GUIs?

That's the essence of it. Develop once, run across GUIs and still use your character terminals for as long as you want. OpenUI provides a full window manager for terminals, and takes care of all the geometry changes for you, and so on.

By using a high-level tool, don’t developers cut themselves off from the low-level power of the GUI?

The key is to choose a tool that gives you access to the underlying toolkit in some manner. Now, you don’t want to use that most of the time, but you may want to use it sometimes. And when you need it, you need it badly. OpenUI is an open product, so we provide that pass through.

So does OpenUI remove the need to choose a GUI?

Well, it depends exactly what you mean by the question. OpenUI protects you from the consequences of your choice, since it will drive your chosen GUI or GUIs for you, without you having to change any code if you change your GUI. But be clear that it does not replace the underlying GUI: OpenUI drives the GUI. There are other products which emulate various GUIs. The trouble is, they're just that little bit different, so that they can be disconcerting and reduce the end-user transparency benefit. Or they don't co-exist with other applications, for instance not supporting the Clipboard in Windows 3.

Or when the next version of a GUI comes along, you don’t get the benefit of that upgrade, at least for a while.

Talking of co-existence, does OpenUI introduce new problems?

No, exactly because OpenUI drives the native GUI toolkit in each case. Co-existence is a concern mostly for the users (who, after all, are the audience for your development). As we move into the 1990s, users are multi-skilling. Now it makes sense to give them one device on the desktop to perform multiple tasks, driving multiple applications. These applications should be able to interact and play together. So if they are all written in MS Windows, you can cut and paste from your inventory control to a spreadsheet, and so on — they'll all feel like part of a cohesive whole. You get the benefit of transfer of learning across all your applications.

Now there's one other aspect to co-existence, one that deals with the nightmare that haunts larger MIS shops. You may have boxes from different vendors, and you go through all the pain of stage one: getting the boxes networked together. Then you find that you really haven't got any interoperability, because a critical application can't play on your screen, since it comes from a different platform. In contrast, working with a product like OpenUI provides you with a degree of application independence from the user interface. So your single application, somewhere on the network, can talk to different GUIs on different platforms, without you having to do any redevelopment work. That's really useful co-existence from the MIS viewpoint.

Are you suggesting we are inevitably moving to a client-server structure for all applications?

That's a subtle question to answer. It's probably a stage we'll go through. In any case, a tool like OpenUI can protect you from that issue, too. Longer term, the world may move much more to a peer-to-peer structure as hardware becomes cheaper. In other words, desktop boxes may be powerful enough to do all processing locally, but still need to share results, synchronise data and so on. So the term 'client-server' may die away, but I think networked applications, in some form, are here to stay. That's a key facet of the portability and co-existence issue.

So, to sum up?

If you're not careful, GUIs are an enormous drain on programming productivity, and a trap if you lock your applications into a particular technology. For productivity, as well as portability and co-existence, developers need to consider higher-level tools that work to standard APIs. That's a general principle for portability in all parts of application development, and it's equally true about the user interface.


DISTRIBUTING OLTP


UNIX OLTP: Not just an IT myth

By Peter Hind

A PERUSAL of computer magazines over the last few months reveals regular use of the term "Unix on-line transaction processing" (OLTP) to describe everything from multiprocessing hardware to databases. Coupled with the usage of this term is the implied message that the mainframe is dead, and that businesses such as banks, airlines and insurance companies can begin to replace their corporate mainframe systems with new Unix host computers.

This is clearly unrealistic. Mainframe transaction processing systems have evolved over 20 years, while commercial Unix systems are still in their infancy. It would be naive to expect the transaction processing systems within Unix to rival, at this stage, the mature, feature-rich environments which have developed on the proprietary mainframe systems.

Unfortunately, through the over-use and misuse of Unix-based OLTP as a catchphrase, its real benefits are often ignored. These benefits lie in distributed transaction processing environments that can integrate with mainframe data.

Because of new developments and fierce competition in the mid-range Unix marketplace, the price/performance available has expanded dramatically over the last few years. It is now cost-effective to provide separate departmental, application, or regional computers instead of expanding a central mainframe.

However, users familiar with mainframe redundancy, resiliency and performance tuning facilities find these lacking in the new Unix environment. Furthermore, as the number of departmental computers in organisations expands, it becomes a real challenge to avoid "islands of technology", where separate computers can hold different values for the same information.

The X/Open model for OLTP seeks to address both these concerns. It has defined a set of standard protocols and conventions that allow for distributed transaction processing. The X/Open model currently comprises three components: applications (clients and associated servers), a transaction manager and resource managers.

The transaction manager oversees application clients and servers, modules which can both be executed on any system in a network. It is this flexibility that makes reliable distributed processing a reality in the 1990s.

The specific location of these modules is determined by a systems administrator, and managed by the transaction manager. In effect, the transaction manager becomes the brain of the OLTP system, responsible for routing clients' requests to the appropriate server, ensuring data integrity via a well-tested two-phase commit protocol, optimising the performance of the system through a set of load balancing and resiliency tools (so providing a new level of mainframe resiliency and functionality to mid-range systems) and managing the locking of accessed information.

Resource managers, usually databases, provide requested data. Transaction managers and applications work with a resource manager by requesting it to perform services on their behalf, without having to know how they are performed. As long as all elements support the same service interface, users should be able to plug in new components as their needs dictate.

X/Open has circulated a formal proposal for a new interface — the XA interface — to database vendors for comment. It has already been adopted by Oracle and Informix, which both expect to have XA-compliant databases by early 1992.

The XA interface will introduce a completely new concept in open systems, because it enables applications to work with several databases. Suddenly, users will have freedom in both their hardware choice and their database selection. It will become feasible to integrate different vendors' databases in a distributed network.
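The XA interface itself is, in essence, a C structure of entry points that every resource manager supplies, so a transaction manager can drive any of them without knowing what lies behind. The sketch below conveys that flavour only; rm_switch, the entry-point names and the integer transaction identifier are simplifications invented here, not the published X/Open signatures.

#include <stdio.h>

struct rm_switch {                    /* what each resource manager supplies */
    const char *name;
    int (*prepare)(int xid);          /* phase one of two-phase commit */
    int (*commit)(int xid);           /* phase two */
    int (*rollback)(int xid);
};

/* two hypothetical vendors, each implementing the same entry points */
static int rm1_prepare(int xid)  { printf("RM1 prepared xid %d\n", xid);    return 0; }
static int rm1_commit(int xid)   { printf("RM1 committed xid %d\n", xid);   return 0; }
static int rm1_rollback(int xid) { printf("RM1 rolled back xid %d\n", xid); return 0; }
static int rm2_prepare(int xid)  { printf("RM2 prepared xid %d\n", xid);    return 0; }
static int rm2_commit(int xid)   { printf("RM2 committed xid %d\n", xid);   return 0; }
static int rm2_rollback(int xid) { printf("RM2 rolled back xid %d\n", xid); return 0; }

static struct rm_switch resource_managers[] = {
    { "vendor-one database", rm1_prepare, rm1_commit, rm1_rollback },
    { "vendor-two database", rm2_prepare, rm2_commit, rm2_rollback },
};

int main(void)
{
    int xid = 42;   /* a global transaction the transaction manager is coordinating */
    for (int i = 0; i < 2; i++) {
        printf("TM sees resource manager: %s\n", resource_managers[i].name);
        resource_managers[i].prepare(xid);   /* same call, different vendor underneath */
    }
    return 0;
}

How the transaction manager orders these calls across the resource managers is the two-phase commit described below.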

Such networks of heterogeneous databases are the future of OLTP. By allowing transactions to be executed across a network, users can give existing applications additional capacity on established mainframes. They can also leverage their investment in these established systems, giving them extra functionality through integration with departmental Unix servers.

This open flexibility has been adopted by Unisys as a key component of its Unisys Architecture, the Integrated Information Environment (IIE). Designed specifically to integrate Unisys and other vendors' mainframe hubs, Unix servers and workstations in a cooperative processing network, the IIE uses the X/Open model for distributed transaction processing as its base.

The X/Open model incorporates two concepts to achieve distributed transactions. First, it distinguishes between local transactions and global transactions. A local transaction is a set of operations that a resource manager (i.e. a database) executes as a local unit of work under its own control. A global transaction is a set of operations that is under the control of transaction managers and that includes local transactions.

Second, the X/Open model assumes that the transaction managers and resource managers in an OLTP system use a two-phase commit protocol. This protocol introduces an extra step into the process of committing a transaction.

A two-phase commit allows the transaction manager to tell all involved systems to prepare to commit their parts of the transaction. If all the involved systems respond to the "prepare to commit" message by saying they are ready, the transaction manager then issues a commit command. Otherwise, the transaction is aborted.


Take the example where the client on machine B sends out a transaction that not only updates its own resource manager, but those of machines A and C. The two-phase commit protocol forces the transaction managers on machines A and C to acknowledge the request before all the databases in the network are updated simultaneously.

This process ensures a consistent view of the same data residing on different machines, providing the mechanism to bridge “islands of technology”.
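A minimal sketch of the coordinator's side of this exchange is shown below. The participant list and voting functions are stand-ins for the resource managers on machines A, B and C; a real transaction manager would add logging, timeouts and recovery.

    /* Minimal two-phase commit coordinator sketch; the participants and
     * voting functions are invented stand-ins, not any vendor's API. */
    #include <stdio.h>

    #define NPARTICIPANTS 3                 /* the resource managers on machines A, B and C */

    static int prepare_to_commit(int m)
    {
        printf("machine %c: prepare to commit\n", 'A' + m);
        return 1;                           /* 1 = ready; a real system could vote no or time out */
    }

    static void commit(int m)   { printf("machine %c: commit\n",   'A' + m); }
    static void rollback(int m) { printf("machine %c: rollback\n", 'A' + m); }

    int main(void)
    {
        int ready = 0, m;

        /* Phase one: collect a vote from every participant. */
        for (m = 0; m < NPARTICIPANTS; m++)
            ready += prepare_to_commit(m);

        /* Phase two: commit everywhere only if every vote was yes; otherwise abort. */
        for (m = 0; m < NPARTICIPANTS; m++) {
            if (ready == NPARTICIPANTS)
                commit(m);
            else
                rollback(m);
        }
        return 0;
    }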

It is important to note that the resource managers in this scenario could reside on mainframe computers, Unix servers or workstations. Furthermore, the above scenario could relate to three separate databases such as Oracle, Informix and Unify. It is expected that within the next six months, Informix and Oracle should be able to work together in a heterogeneous database network.

Unisys plans to enable its 2200 Series and A Series mainframe databases to operate in such a network within the next 12 months, as part of the Unisys Architecture strategy to integrate mainframe products.

The distribution of functions to the departmental or regional levels is advantageous from both the functional and technical viewpoints. The X/Open model for distributed transaction processing allows existing centralised applications to be downsized to the department or desktop while still maintaining a strategic role for the mainframe.

This translates into a cost-effective evolutionary solution that meets the requirements of both the users who need technology that helps them keep their competitive edge, and the MIS people who need control of the information network to maintain data integrity and provide adequate response times for mission-critical data.

Unix OLTP articulated as a distributed transaction system along these lines, rather than as a mainframe replacement program, promises to be one of the great truths of the 1990s, rather than one of its myths.

Author: Peter Hind is Unix systems marketing manager, Unisys Australia Ltd.

Unix International: Distributed Computing defined in ATLAS

THE information industry has entered the fourth wave of computing. The first wave began in the 1960s with batch processing on large computing platforms. It began to mature with the introduction of timesharing (multiple users sharing a single processor) in the 1970s, known as the second wave, and eventually gave way to standalone personal computing in the 1980s, referred to as the third wave. Today the diversity of computing platforms requires tighter integration and distributed networking for transparency in applications and data.

UNIX International has succeeded in defining the industry's most comprehensive distributed computing framework available to address end-user needs. The distributed computing framework, one of the three key elements defined in UNIX International's open systems architectural framework (known as UI-ATLAS), provides compatibility with installed systems while defining interfaces for innovative technologies. It offers a clear path for transparent evolution from multiple proprietary platforms, as well as from the System V installed base, towards the next generation of computing (i.e., distributed and shared applications running transparently over a network).

UNIX International, in order to meet the building market demand to integrate disparate computing systems, executed a two-phase approach in defining the UI-ATLAS distributed computing framework. The first phase consisted of thoroughly understanding the business issues driving computing decisions at software developer and end-user sites, technology trends and directions, and the future computing requirements of developers and end-users.

"By adhering to standards, developers would lower their investment profile and would be able to capitalise on outside technologies."
► Gary Jackson, chairman, Unix International Australian Marketing Group

The results of extensive interviews, discussions within UI’s membership community, use of business consultants, and research yielded four key requirements:

■ Ease of use for customers,
■ Standards-based environments,
■ Protecting current investments, and
■ Broader access to and integration of information.

These requirements gave a clear picture of the larger, more macro trends developing in the world business community. Computing was clearly spreading to less and less sophisticated users in business. As such, enhancing ease of use and making systems less threatening to a broader segment of the workplace population became the key to product development.

By adhering to standards, developers would lower their investment profile and would be able to capitalise on outside technologies to fulfil part of the product requirements. End-users would benefit because they would be able to better "mix and match" systems to enjoy the most attractive pricing available from an increasingly competitive supplier environment.

The ability to incorporate distributed computing without wholesale retooling of the capital base of a company was another key requirement. A distributed computing framework would have to provide the benefits of distributed computing by building on and evolving from an environment consisting of people, training, capital and software.

And finally, secure and transparent "data highways" would be needed to provide the mechanism for widespread deployment of distributed computing. Just as the freight trucking industry blossomed after the building of the Autobahn in Germany, so would an industry dedicated to creating, moving, managing, and reporting readily available corporate data anywhere in the world from anywhere in the world. This notion of instant access to up-to-date corporate data promises to provide the fuel for the next major change in how businesses operate in the modern world.

In September of 1990, phase two was launched when UNIX International member companies formed a taskforce. The mission of this taskforce was to define an architectural framework that would address the needs outlined in phase one, extend the current capabilities of existing networking technologies, and provide an open and forward-looking development framework to support the introduction of advanced distributed computing technologies.

As a result, the taskforce defined an overall architectural framework that addressed each of the key requirements noted in phase one.

The criteria for the UI-ATLAS framework were clearly stated:

■ Must have a lifetime greater than 10 years;

■ Must be based on existing technology;
■ Must be scalable to larger networks and enterprises;
■ Must be portable over a wide variety of architectures;
■ Must provide compatibility;
■ Must be modular, and
■ Must be easy to apply and use.

It was clear from the outset that a comprehensive distributed computing framework must have a long life, in the order of 10 years or more, and hence must be open to accommodating the inevitable advancements and innovations that will occur over the upcoming decade (i.e., compatibility).

Furthermore, the architecture must provide interoperability with a variety of disparate systems without requiring extensive changes to those systems. "Non-invasive interoperability" will be a key factor in achieving rapid adoption and widespread deployment of any distributed computing environment.

Protection of investment, always a critical concern of any ongoing business, is afforded by ensuring heterogeneous interoperability. However, systems containing a variety of disparate computers, whether from different manufacturers or different product lines from one manufacturer, require a new technology paradigm for application development.

Object-oriented computing is such a paradigm. It enhances the ease with which distributed computing applications can be developed and is critical to achieving transparent information flow between heterogeneous computer system environments.

Programmatic interfaces utilising distributed object-oriented technology significantly ease the relatively difficult task of distributed application development by "hiding" the differences of the underlying systems through use of a higher-level, machine-independent interface.
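The sketch below illustrates the principle in plain C: one machine-independent calling interface in front of two quite different underlying systems. The structure and function names are invented for illustration and are not part of any UI-ATLAS interface.

    /* Illustrative sketch: one machine-independent operation, several
     * underlying implementations. The shape of the structure, not any
     * UI-ATLAS API, is the point. */
    #include <stdio.h>

    struct data_service {
        const char *system;
        void (*fetch)(const char *item);
    };

    static void fetch_via_nfs(const char *item)     { printf("unix server: read %s over NFS\n", item); }
    static void fetch_via_gateway(const char *item) { printf("mainframe: read %s through gateway\n", item); }

    static const struct data_service services[] = {
        { "unix",      fetch_via_nfs },
        { "mainframe", fetch_via_gateway },
    };

    int main(void)
    {
        int i;
        /* The calling application is identical regardless of which system
         * actually holds the data. */
        for (i = 0; i < 2; i++)
            services[i].fetch("quarterly_sales.dat");
        return 0;
    }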

With well-defined object-oriented programming interfaces, UI-ATLAS has the ability to provide a new look to existing applications and to facilitate rapid development of new applications which run transparently over heterogeneous systems.

To meet the desire for ease of use, UI-ATLAS provides desktop metaphors as the vehicle for user interaction with the enterprise. Beyond user interfaces, the taskforce also addressed the needs of application developers and the systems management function, which greatly simplifies installing and administering a distributed computing environment by offering the end-user the tools necessary to manage a widely dispersed and complex heterogeneous environment.

In the area of standards, UI-ATLAS incorporates standards such as the ISO models and the industry-recognised X/Open and POSIX standards. Transition from current products to OSI-compliant products will be provided (e.g., TCP/IP to OSI).

By enhancing the ability for corporate data and information to flow securely across a distributed computing network, a broader access and use of that information is a likely and desirable consequence.

Here is an outline of a UI-ATLAS distributed computing framework:

■ A complete distributed applications and services framework;

■ A state-of-the-art solution that leverages existing interfaces, and

■ A framework that integrates heterogeneous computing without disrupting current applications, communications, and/or data formats.

Non-invasive interoperability is a key element for the successful deployment of any distributed computing framework. By choosing to interoperate with existing network and application interface facilities, the additional cost and delay of adding new capabilities to existing systems before interoperability can be achieved is completely avoided. This new framework defines interoperability with networked services, such as IBM's System Applications Architecture (SAA) and Digital Equipment Corporation's Network Applications Systems (NAS), as well as various ISO standards and protocols that provide interoperability with systems from UNIX International members. The UI-ATLAS framework also encompasses the Open Software Foundation's Distributed Computing Environment (DCE).

This new framework was designed to address three key audiences:

■ End Users;
■ Distributed Application Developers, and
■ System Integrators.

It will be built upon technologies available today while paving the way towards the next generation of computing, which is a truly transparent distributed enterprise.

UNIX International succeeded in its task of defining an architectural framework. Today, UNIX International offers the industry's most comprehensive distributed computing framework available. It addresses the full computing needs of distributed computing application developers, system administrators, and end-users and provides extensive facilities for both system management and transaction management.

Yet the UI-ATLAS framework provides compatibility with installed customer bases and investments while offering transparent evolution from today's System V technology.

As this framework evolves, it will be controlled by the industry as a whole, not directed by any single vendor; an important consideration for any potential user of a distributed computing framework because it provides users with vendor independent flexibility. In addition, this framework provides interfaces to develop the next generation of applications that transparently run across heterogeneous environments.

Services offered as part of UI-ATLAS include:

■ System Management Services;
■ Transaction Services;
■ User Interface Services;
■ Object Management Services;
■ Time Services;
■ Naming Services;
■ Distributed File Services;
■ Data Storage Services;
■ Security Services, and
■ Support for Cluster Processing and Client/Server Computing, which includes Remote Procedure Calls (RPC) and support for transparent remote execution.

Interfaces for these services are being carefully chosen to enhance developer productivity and to significantly reduce the cost of application development and porting.

This architecture and framework has been reviewed by a significant number of major computer hardware and software suppliers and has been fully endorsed by the UNIX International Company membership. Given the “up-front buy-in” and extensive facilities of this new framework, it is apparent it will establish itself as the industry standard for distributed computing applications as it builds on existing technologies and becomes fully available over the next two to three years.

In summary, the UNIX International UI-ATLAS framework is a truly complete Information Technology model based on open systems and industry standards. This framework defines the software environment necessary to support the business needs of the 1990s.

The UI-ATLAS framework enables the deployment of cost-effective computing without disrupting current computing environments, while offering a rich set of features to take advantage of improved system and application services for diverse organisations. UI-ATLAS also provides the interfaces needed to develop new applications which will run over a diverse set of systems, providing a new way in which to utilise existing information technology investments.

The UI-ATLAS framework is based on existing interfaces and can begin deployment not only to solve business needs, but to provide the competitive advantage for today’s ever-changing business climate.


MORE THAN A NETWORK


Anthony West is Director of Marketing & Corporate Strategy, Sun Microsystems Australia

Distributed applications now on major platforms
By Anthony West

SUN'S vision for distributed computing is summed up by its slogan, "The Network is the Computer". Back in 1984, when "networking" at best meant the transfer of data between similar systems, Sun's vision was to provide technology that would allow file sharing among every computer on the network, regardless of make.

Sun’s distributed file system, NFS (Network File System), has enabled that vision to largely be realised.

Today, it is possible for a user to sit at his or her PC, log onto drive E, which may be a Sun SPARCstation, move a file to drive F which could be a VAX, or access a file stored on drive G, which happens to be a mainframe running IBM’s MVS/NFS. To the user and the PC applications, each foreign machine acts like a local disk drive.

In 1991, Sun's vision expanded. We are now looking beyond data access to an era where applications and data are distributed throughout the organisation, so that computing resources are used most effectively, where they are needed; where there is less dependence on centralised MIS systems, and where the user has greater ability to access network resources without having to understand where they are located. Different parts of a single application can run on multiple computers, each performing tasks for which it is appropriate.

Examples of the sort of distributed computing environment Sun users have already implemented range from the useful but mundane (allowing the user to tell his computer to find a time and place that would be suitable for a staff meeting), to mission-critical (allowing a stock exchange trader to receive multiple feeds from financial markets, or to provide instant analysis on the basis of which time-sensitive buying or selling decisions are made).

The benefits of "the network is the computer" are inherently obvious to users. Data processing professionals, however, realise there are numerous problems which need to be answered. It may sound ideal to place information and computing power where it is needed, but how can the data be kept consistent across the network? How can it be kept secure? What happens when the network fails? How can we afford, or even find, enough network-knowledgeable programmers and network managers to write and maintain distributed applications? And finally, why should a company risk investing in distributed systems before the standards for this technology are established?

Sun's answers begin with ONC (Open Network Computing), the distributed computing platform on which NFS is built. ONC provides the building blocks needed to build distributed applications in heterogeneous networks. It is based on a remote procedure call (RPC) protocol, which is simply a programming instruction that executes a procedure on a remote system. Therefore, programmers can write standard procedural programs; the main difference between an application that is distributed and one that is not is that the distributed application executes some procedures on remote machines.

The RPC paradigm is the most widely accepted for distributed applications in heterogeneous environments. The ONC RPC overcomes differences in computer architectures (which affect the way data is stored) through the eXternal Data Representation (XDR) facility, which provides an architecture-independent method of representing data for applications built on the RPC/XDR base.
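For a flavour of what this looks like to the programmer, here is a hedged sketch of a hand-written ONC RPC client call in C. The program, version and procedure numbers and the host name are invented for illustration; in practice rpcgen, or a generator such as RPC TOOL, produces this plumbing.

    /* Sketch of an ONC RPC client; the remote program is hypothetical. */
    #include <stdio.h>
    #include <rpc/rpc.h>                    /* ONC RPC and XDR declarations */

    #define DEMO_PROG      0x20000099       /* hypothetical program number           */
    #define DEMO_VERS      1                /* hypothetical version number           */
    #define DEMO_PROC_ADD1 1                /* hypothetical procedure: add one       */

    int main(void)
    {
        CLIENT *clnt;
        int arg = 41, result = 0;
        struct timeval timeout = { 10, 0 };

        /* Bind to the (hypothetical) remote program over TCP. */
        clnt = clnt_create("server.example.com", DEMO_PROG, DEMO_VERS, "tcp");
        if (clnt == NULL) {
            clnt_pcreateerror("server.example.com");
            return 1;
        }

        /* XDR routines (xdr_int here) convert the argument and result to and
         * from a machine-independent representation. */
        if (clnt_call(clnt, DEMO_PROC_ADD1,
                      (xdrproc_t) xdr_int, (caddr_t) &arg,
                      (xdrproc_t) xdr_int, (caddr_t) &result,
                      timeout) != RPC_SUCCESS) {
            clnt_perror(clnt, "call failed");
            clnt_destroy(clnt);
            return 1;
        }

        printf("remote result: %d\n", result);
        clnt_destroy(clnt);
        return 0;
    }

The xdr_int routines on either side of the call are what shield the application from byte-order and word-size differences between the client and server machines.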

Apart from NFS, ONC services include the network information service (NIS), which is a database management system for storing system information such as host names, network addresses, user names and networks; Lock Manager, which allows users to coordinate and control access to information by supporting file and record locking across a network; REX, for remote execution of applications located on remote systems; and various NFS-related services.

The ONC RPC/XDR protocol is widely installed. Of the 1.3 million systems running NFS, from PCs to mainframes, about one million also include the RPC libraries and are therefore able to act as clients or servers to distributed applications.

Most NFS source licensees support the RPC programming interface. For example, the ONC RPC protocols are supported by IBM on all its operating systems except OS/400. AT&T UNIX System Laboratories supports RPC/XDR as a standard part of System V Release 4. And the major PC local area network vendors, Novell, 3Com and Banyan, have all announced support for ONC RPC technology; no other RPC technology has anywhere near as wide support or as large an installed base.

Although the RPC concept is intuitive, developers have nonetheless shied away from the extra level of programming required to handle the communications between the client and server parts of the application. The availability of a high-level code generator, Netwise Inc's RPC TOOL, now addresses this problem. The RPC TOOL has been enhanced to support ONC, which allows the programmer to design and write application code in the traditional way.

Standard language declarations are used to describe the relationships that exist between a client machine and its server counterpart. RPC TOOL generates 100 per cent of the RPC code required to build client and server applications and produces the enhanced transport-independent RPC (TI-RPC) supported in UNIX System V Release 4. SVR4 is fully compatible with the existing RPC but includes support for additional network transport protocols, such as TCP, UDP and the OSI protocols.



Page 19: COMPUTING PROFESSIONAL - Australian Computer Society...computing, client/server computing, and networked computing. Collective computing more precisely emphasises several important

ASSOCIATED INTEGRATED MAINFAAME

COMPUTERS P/LALL SYSTEMS AND UPGRADES

BUYING — SELLING — LEASING NEW OR REMARKETED

SYSTEMS & PERIPHERALSCONCURRENT

Fujrrsu

NEC

Ultimate

WYSE

AsSCKTATKI) lM'KIiR VI Kl) MainframkCOMPUTERS P L

NEW OR. REMARKETED

SYSTEMS & PERIPHERALS

ACCESSING A WORLDWIDE DEALER NETWORK CALL US NOW FOR YOUR SYSTEM REQUIREMENTSLevel 8, 35 Spring Street, Bondi Junction NSW 2022 Australia

PO Box 884 Bondi Junction NSW 2022 Australia Telephone: (+61 02) 369 9220 Facsimile: (+61 02) 387 1569

PROFESSIONAL COMPUTING, SEPTEMBER 1991 33

Page 20: COMPUTING PROFESSIONAL - Australian Computer Society...computing, client/server computing, and networked computing. Collective computing more precisely emphasises several important

support for additional network transport pro­tocols, such as TCR UDP and OSI protocols.

Sun sells a version of RPC TOOL called the ONC Application Tool Kit, and Netwise markets versions for many other operating systems. This enables the software developer to run the server part of an application on a UNIX workstation, and the client side on a PC running DOS.

ONC and RPC TOOL are available today, providing a widely supported platform for developing distributed applications in heterogeneous networks. On the horizon are further enhancements to ONC which will ease MIS concerns about data security and consistency.

Sun has announced support for the Kerberos method of authentication, which will inhibit unauthorised remote procedure calls or NFS requests. Sun will also enhance its network information service (NIS) to provide greater consistency between distributed NIS servers. In addition, several third-party vendors are now providing RPC-based applications which provide automated back-up and fault-tolerance functionality across a mixed-vendor network.

Management of distributed networks is also becoming easier, thanks to the rapid acceptance of the SNMP standard. Products such as SunNet Manager and AIM Technology's Sharpshooter (an NFS management tool) greatly simplify the network manager's job and reduce the cost of maintaining a distributed applications environment.

This brings us to the final barrier slowing the trend to distributed computing: standards. Although a number of organisations have embraced ONC's NFS and RPC/XDR as part of their Guidelines for Informatics Architecture, the Open Software Foundation has espoused a competing technology, known as DCE, based on HP's NCS remote procedure technology.

Recognising that customers do not want to be forced into a choice between the two technologies, Sun and HP have announced that they will work towards a common applications environment. This will provide a standard applications programming interface for object-oriented programs, which will run over both ONC and NCS; the developer can write an application once, and it will work on either distributed computing platform. HP and Sun will also cooperate in migrating both ONC and NCS to a common RPC, based on emerging standards.

For procedure-based applications, the RPC TOOL will provide similar protection. Netwise will maintain a compatible applications programming interface that will support both the ONC RPC and the emerging ISO RPC standard.

“The network is the computer” remains a long-term vision. As corporations move to a fully-distributed computing model, they must contend not only with new technology but also with substantial organisational changes.

The role of the MIS professional will change from keeper of the corporate mainframe to systems integrator and network manager. But the business justifications, from lower maintenance costs to faster, more flexible information systems, are increasingly clear. Sun is committed to enhancing ONC and maintaining its position as the industry's leading distributed computing technology for open systems.

ACS in VIEW

PRESIDENT’S MESSAGE:

From Alan Underwood

AS PRESIDENT of the ACS, I represent the Society on the Information Industries Roundtable. At a recent meeting of the Roundtable, and after discussion with representative system integrators around Australia, the decision was taken to form a Research Association for the Information Industry.

The concept will provide a means for a number of firms in the information industry to jointly fund research of value to them.

The research normally would be undertaken on a contract basis by universities or public sector research organisations such as the CSIRO.

Given the competitive nature of the IT industry, co-operative research ventures inevitably will have to be pre-competitive, aiming to develop generic technologies which will be beneficial but not vital to member firms, or strategic research aiming to develop major new industrial opportunities faster and more effectively so that the whole industry may gain a quantum leap in its international competitiveness.

The discussion paper commissioned by the IIR and authored by Dr Ergad Gold includes a section on the success of other Australian research associations. Dr Gold's paper cites the major successes of research sponsored by the Australian Mining Industry Research Association (AMIRA) and the Australian Wool Commission (AWC). These are significant in that they have a mutual interest in research which improves process efficiencies, extraction technologies and technical management.

The Australian experience to date highlights the critical issue of funding. When a sustainable funding base is assured, the success of a research association revolves around the selection of appropriate R&D projects, the rigorous management of the fund allocation process, and the successful commercialisation of the intellectual property resulting from the research.

The ACS will assist the Roundtable in the development of the Research Association proposal, though personally I believe that there will be a limited number of industry sponsors until the industry participants in Australia recognise the benefits accruing from increased co-operation with each other. What are your views?

ALAN UNDERWOOD, MACS, PRESIDENT.

Reference: 'A Research Association for the Information Industry', a discussion paper prepared by Dr E Gold for the Information Industry Roundtable, February 1991.



DR PATRICK Nugawela is a family physician in full-time medical practice in Perth. He has had intimate involvement in primary care computing for several years, having been a member of the National Computer Committee of a medical Royal College.

In 1988, he convened the First Pan-Pacific Medical Computing Conference in Singapore, in association with regional medical professional bodies from 11 countries. In addition, he has been actively involved in the formation of a regional medical informatics association in the Asia-Pacific region.

Dr Nugawela is Australia's representative at the International Medical Informatics Association (IMIA) and the national coordinator of the Australian Medical Informatics Association. He has no commercial interests in computing.

TC4 - Medical Informatics

IFIP TC4 (Medical Informatics) is a most interesting Technical Committee. It does not exist. First, let me say in this personal perspective that "medical" is used in a generic sense and incorporates the entire health-care profession and allied specialties.

The IFIP TC4 was one of the early TCs in IFIP's history. The prolific interest it generated resulted in the formation of IMIA, the International Medical Informatics Association. IMIA was initially a Special Interest Group of IFIP, and soon became a separate entity altogether at about the time Ashley Goldsworthy was world president. Now, IMIA is a member of IFIP and maintains a close liaison, still sharing the same secretariat in Geneva. This reciprocal relationship is borne out in Australian arrangements, as well.

The Australian Computer Society, which represented Australia at IFIP, also represented Australia at IMIA at the outset. Currently, the partnership between the ACS and AMIA (Australian Medical Informatics Association) has produced a form of combined representation, the result of a synergistic liaison between the professional bodies.

Strategic Alliances

AMIA and the ACS enjoy a unique bilateral relationship, to the ultimate benefit of Australian medical informatics as an emerging discipline. AMIA is a national professional association, with strict criteria for general membership by way of academic qualification. AMIA is also the national health-care committee of the ACS and the Society is, in turn, a Foundation Member Organisation of the Association.

This reciprocal arrangement between the professionals who form the major components of 'medical' and 'informatics' has created much interest overseas, as it appears to have eluded many other well developed countries which have endeavoured to achieve a similar union. Accordingly, the medical informatics professional may be an IT, medical, legal or accounting professional. The person may also be a professional administrator or policy and planning executive.

The Society is thus well placed to be a focal point for the various other professions which may have an integral interest in IT. The involvement of the IT profession in these specialised areas, often requiring multilateral expertise, will strengthen the basic infrastructure of IT as a profession, and enhance its ability to influence trends and standards in emerging IT-related specialities.

Australian Medical Informatics and the ACS

Organisation of health-care within the ACS transcends at least three divisions: the social and legal (director Roger Clarke), technical (director Karl Reed) and international (director Alan Coulter). The ultimate involvement in professional development and quality assurance is surely not far behind. These administrative distinctions, it should be noted, are often not clear-cut in health-care issues in IT. Take, for example, the smart card as a portable patient record. There are technical issues, international standards and profound ethical, social and legal obligations to consider in such data management.

In October this year, ACC'91 will incorporate a one-day medical informatics stream which will focus on strategic issues in Australian medical informatics today. Organised by AMIA in association with the ACS, it should affirm the Australian Computer Conference's pedigree as the major professional event in IT in Australia.

In due course Australia will, I am sure, follow international trends in formalising medical informatics as a professional discipline, to the end of inaugurating an Australian College of Medical Informatics.

The International Scene

The cooperative relationship between IFIP and IMIA internationally is reflected in Australian arrangements between the ACS and AMIA, as detailed above. Like the IFIP TCs, IMIA has various Working Groups:

WG 1. Education
WG 3. Biosignal Analysis
WG 4. Data Protection
WG 5. Primary Care
WG 7. Biomedical Patterns Recognition
WG 8. Nursing Informatics
WG 9. Developing Countries
WG 10. Hospital Information Systems
WG 11. Dental Informatics
WG 12. Pharmaceutics & Pharmacology

Many of these WGs have various categories (e.g. doctors, nurses, computer programmers) and regional equivalents. Australia is represented on many of these. AMIA intends to develop parallel arrangements, liaising directly with IMIA's WGs and the ACS.

Various SIGs, too numerous to list, exist within this framework, for example Coding and Classification (Nomenclatures) and Software Engineering. Quality Assurance is an emerging issue.

The Regional Scene

At regional level, Australia is a member of SEARCC through the ACS. Through SEARCC nations, the foundations for IMIA-Asia (a regional member of IMIA) are being sought. Australia is an active supporter of evolving arrangements.

In conclusion, the harnessing of the combined efforts of both the ACS and AMIA has placed medical informatics on a firm and respectable footing in Australia, in tandem with international standards. In addition, the Society is placed in a strategic position to become an umbrella organisation, if it so chooses, for all those professions with a bona fide interest in IT-related issues, be they medical or otherwise.

We hope to see you in Adelaide this October.

Dr Patrick Nugawela
Australian Computer Society
Australian Medical Informatics Association
Tel. (09) 448 3022 Fax (09) 448 3817



PCP: It's going well
A progress report

THE VICTORIAN Branch now has over 50 PCPs. The 50th PCP Certificate will be awarded to long-term ACS Victorian member Fred Royston at a breakfast meeting in September. Geoff Dober, the chairman of the Continuing Education Committee which administers the PCP scheme, is delighted with the response to the PCP scheme. "Every ACS branch has had a good response to the scheme," he said. "It is interesting to note that in member surveys, not all members are keen on the scheme but that is only to be expected. We have introduced something new; it is a necessary step in moving to true professional society status and, eventually, most members will realise its value. The number of our members who support the scheme already is a good indication that it will thrive."

Dober, one of the two vice-presidents of the ACS, is already a 1991 PCP. He also attained PCP status during the pilot in Victoria in 1990. Other ACS officials will soon be joining the PCP ranks, especially those who will be attending the National Conference in Adelaide in October. "The National Conference will earn attendees 21 PCP hours," Dober said, "and our Branch offices are gearing up for a big increase in PCP claims following that Conference. I expect our national president, Alan Underwood, will qualify at the National Conference, if he has not already done so, and Vice-President Garry Trinder will join him as a PCP."

ACS activities are not the only ones that are leading to PCP status for ACS members. Many members have claimed PCP hours for in-house courses and activities that are not listed in the PCP Directory. "There is little effort involved in placing your PCP claims," Dober said. "If the activity a member attended is not listed in the Directory or in the newly distributed appendix to the Directory, then all that is necessary is for the member to enclose, with their claim, an outline of the activity attended so that we can assess if it adds to the skills and knowledge of an ACS member and allocate the number of PCP hours.

We do prefer members to send their PCP claims after they have accrued their 30 hours. Some of the claims ask for more than six hours a day; these are cut back to the maximum allowable six per day. Some members have claimed for what appears to be a marketing breakfast followed by a short session on the company and a look at a product. We assess those carefully because PCP is about updating the skills and knowledge of a member. Additional knowledge about products is obviously useful but we try to avoid allowing for the marketing component."

The ACS is about to approach suppliers of courses to prepare the 1992 Directory, which will be distributed to ACS members in November this year. ACS members have recently received an appendix to the Directory, which was distributed in February. In addition to the 50 or so vendors in Volume 1 of the Directory, the appendix contains courses from 29 vendors, most of them not in the first Directory. "The vendor response has been exceptional," Dober said. "More than one vendor has claimed that the PCP endorsement is the best value marketing they have ever had. Some of the course vendors in smaller states have been able to take their courses to other states, and most of the vendors with courses listed in the Directory and the appendix are responsive to requests for in-house versions of their courses."

Dober believes that the PCP scheme has made many employers more conscious of the need to provide adequate professional development for their IT staff. "With PCP, we have made a strong statement that an IT professional needs at least 30 hours of continuing professional development each year. Some public service areas and some organisations have built this into their staff development plans, and many employers are encouraging their staff to seek PCP status."

