
Unit 1



COMPUTER ORGANIZATION AND OPERATING SYSTEMS

1. COMPUTER ORGANIZATION SYSTEMS

1.1 What is a Computer?

A computer is a programmable machine designed to automatically carry out sequences of arithmetic or logical operations. The interface between the computer and the human operator is known as the user interface. A computer contains memory, which stores information and data in the form of text, images, graphics, and audio and video files. The Central Processing Unit or CPU performs the arithmetic and logic operations with the help of a sequencing and control unit that can change the order of operations based on the information stored in memory. Peripheral devices allow information to be entered from an external source and allow the results of operations to be sent out. The CPU executes a series of instructions to read, manipulate and store data. The control unit, the Arithmetic Logic Unit or ALU and the processor registers are collectively known as the Central Processing Unit or CPU. Devices that provide input or output to the computer are known as peripherals. On a Personal Computer or PC, peripherals include input devices, such as the keyboard and mouse, and output devices, such as the visual display unit or monitor and the printer. Hard disk drives, floppy disk drives and optical disk drives serve as storage devices. A graphics processing unit is used to display 3-Dimensional or 3-D graphics. Modern desktop computers contain various smaller processors that assist the main CPU in performing I/O operations.

Memory refers to the physical devices used to store programs, i.e., sequences of instructions, or data in a computer system. Data is stored either on the hard disk or on secondary memory devices, such as tape, magnetic disks, optical disks, Compact Disk Read Only Memory (CD-ROM) and Digital Versatile/Video Disc (DVD-ROM). The term memory is usually associated with addressable semiconductor memory, i.e., integrated circuits consisting of silicon-based transistors, used, for example, as primary memory but also for other purposes in computers and other electronic devices.

Basic Functions of a Computer

The three basic functions of a computer are as follows:

• Data Processing: A computer must be able to process data.

• Data Storage: A computer must be able to store data. Even if data is supplied to a computer on the fly, for processing and producing the result immediately, the computer must be able to store that data temporarily. Apart from short-term data storage, it is equally important for a computer to perform a long-term storage function to store different files.

• Data Movement: A computer must be able to move data between itself and the outside world. The computer operating environment consists of devices that serve as data sources or destinations. When data is received from or delivered to a machine that is directly linked to the computer, the process is known as input/output and the devices used for this purpose are referred to as input/output devices. When data moves over longer distances, to or from a remote machine, the process is known as data communication.

Functional Units of a Computer

In its simplest form, a computer consists of five functionally independent components, namely, input, output, memory, arithmetic logic unit and control unit. A computer accepts information in the form of a program and data


through its input unit, which can be an electromechanical device, such as a keyboard, or from other computers over digital communication lines. The information received by the computer is either stored in the memory for later reference or used immediately by the ALU or Arithmetic Logic Unit for performing the desired operations. Finally, the processed information in the form of results is displayed through an output unit. The control unit controls all the activities taking place inside the computer. The ALU along with the control unit are collectively known as the CPU or processor, and the input and output units are collectively known as the Input/Output (I/O) unit.

• Input Unit: A computer accepts input in coded form through an input unit. The keyboard is an input device. Whenever a key is pressed, the binary code of the corresponding letter or digit is transferred to the memory unit or processor. Other types of input devices are the mouse, punched cards, joysticks, etc.

• Memory Unit: The task of the memory unit is to safely store programs as well as input, output and intermediate data. The two different classes of memory are primary and secondary storage. The primary memory, or main memory, is part of the main computer system. The processor or the CPU directly stores and retrieves information from it. Primary storage contains a large number of semiconductor cells, each capable of storing one bit of information. A group (of fixed size) of these cells is referred to as a word, and the number of bits in each word is referred to as the word length, which typically ranges from 16 to 64 bits. When the memory is accessed, usually one word of data is read or written. Secondary memory is not directly accessible by the CPU. Secondary memory devices include magnetic disks, like hard drives and floppy disks; optical disks, such as CD-ROMs; and magnetic tapes.

• Processor Unit: The processor unit performs arithmetic and other data processing tasks as specified by a program.

• Control Unit: It oversees the flow of data among the other units. The control unit retrieves the instructions of a program (one by one) which are safely kept in the memory. For each instruction, the control unit tells the processor to execute the operation marked by the instruction. The control unit supervises the program instructions, and the processor manipulates the data specified by the programs.

• Output Unit: The output unit receives the result of the computation, which is displayed on the screen or printed on paper using a printer.
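The word and word-length idea from the memory unit description can be made concrete with a short sketch (illustrative Python, not tied to any particular machine):

```python
# Illustrative sketch: a memory word of w bits can hold 2**w distinct bit
# patterns, e.g. the unsigned integers 0 .. 2**w - 1.
def word_patterns(word_length):
    """Number of distinct values a word of `word_length` bits can store."""
    return 2 ** word_length

for w in (16, 32, 64):  # the typical word lengths mentioned in the text
    print(f"{w}-bit word: {word_patterns(w)} patterns")
```

A 16-bit word, for example, can represent 65,536 distinct values.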

Execution of programs is the main function of the computer. The programs, or sets of instructions, are stored in the computer's main memory and are executed by the CPU. The CPU processes the set of instructions along with any calculations and comparisons that are required to complete the task. Additionally, the CPU controls and activates various other functions of the computer system. It also activates the peripherals to perform input and output functions.

The CPU consists of three major components, as shown in the figure: the register set (associated with the main memory), which stores the transitional data while processing the programs and commands; the ALU, which performs the necessary microoperations for processing the programs and commands; and the control unit, which controls the transmission of information amongst the registers and directs the ALU on the instructions to follow.

Control Unit

The control unit not only plays a major role in transmitting data from a device to the CPU and vice versa but also plays a significant role in the functioning of the CPU. It does not actually process the data but manages and coordinates the entire computer system, including the input and output devices. It retrieves and interprets the commands of the programs stored in the main memory and sends signals to other units of the system for execution. It does this through some special-purpose registers and a decoder. The special-purpose register called the instruction register holds the current instruction to be executed, and the program control register holds the next instruction to

Figure: Major Components of the CPU, namely the Arithmetic Logic Unit (ALU), Memory Unit and Control Unit.


be executed. The decoder interprets the meaning of each instruction supported by the CPU. Each instruction is also accompanied by a microcode, i.e., basic directions that tell the CPU how to execute the instruction.
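The interplay of the program counter, instruction register and decoder described above can be sketched as a toy fetch-decode-execute loop. This is purely illustrative: the instruction set and memory layout are invented for the example and do not model any real CPU.

```python
# A toy fetch-decode-execute loop. PC is the program counter, IR the
# instruction register, ACC an accumulator register.
program = [
    ("LOAD", 5),    # ACC <- 5
    ("ADD", 3),     # ACC <- ACC + 3
    ("STORE", 10),  # data memory[10] <- ACC
    ("HALT", None),
]

def run(program):
    pc, acc, data = 0, 0, {}
    while True:
        ir = program[pc]   # fetch: IR <- memory[PC]
        pc += 1            # PC now holds the address of the next instruction
        op, operand = ir   # decode: the "decoder" interprets the opcode
        if op == "LOAD":   # execute
            acc = operand
        elif op == "ADD":
            acc += operand
        elif op == "STORE":
            data[operand] = acc
        elif op == "HALT":
            return data

print(run(program))  # {10: 8}
```

Note how the program counter is advanced during the fetch step, exactly as the instruction register/program control register pair described in the text implies.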

Arithmetic and Logic Unit or ALU

The ALU is responsible for arithmetic and logic operations. This means that when the control unit encounters an instruction that involves an arithmetic operation (add, subtract, multiply, divide) or a logic operation (equal to, less than, greater than), it passes control to the ALU, which has the necessary circuitry to carry out these arithmetic and logic operations.

The figure below represents the basic structure of a CPU.

Figure: Basic Structure of a CPU, showing a data processing unit (Arithmetic Logic Unit (ALU), Accumulator (AC), Data Register (DR)) and a program control unit (Program Counter (PC), Instruction Register (IR), Memory Address Register (MAR), Control Unit issuing control signals), connected to/from main memory or input/output devices.

As an example, a comparison of two numbers (a logical operation) may require the control unit to load the two numbers into the requisite registers and then pass on the execution of the 'compare' function to the ALU.
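The division of labour in that example can be sketched in a few lines (an illustrative model, not a description of real ALU circuitry):

```python
# Sketch: the "control unit" places operands in registers, then hands the
# named operation to a toy ALU that knows a few arithmetic/logic operations.
def alu(op, a, b):
    """A toy ALU supporting a few of the operations named in the text."""
    ops = {
        "add": a + b,
        "sub": a - b,
        "compare": (a > b) - (a < b),  # 1, 0 or -1, like a comparison flag
    }
    return ops[op]

# The control unit's role: load 7 and 3 into registers, then invoke the ALU.
r1, r2 = 7, 3
print(alu("compare", r1, r2))  # 1, meaning r1 > r2
```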

1.2 Evolution and Generations of Computers

The first mechanical adding machine was invented by Blaise Pascal in 1642. Later, in 1671, Baron Gottfried Wilhelm von Leibniz of Germany invented the first calculator for multiplication. Much later, in the late 19th century, Herman Hollerith came up with the concept of punched cards, which were extensively used as an input medium in mechanical adding machines.

Charles Babbage, a 19th-century professor at Cambridge University, is considered the father of the modern digital computer. During this period, mathematical and statistical tables were prepared by a group of clerks. However, even the utmost care and precautions could not eliminate human errors.

In 1842, Babbage came up with his new idea of the Analytical Engine, which was intended to be completely automatic. This machine was capable of performing basic arithmetic functions. However, these machines were difficult to manufacture because the precision engineering required to build them was not available at that time.

The following is a brief description of the evolution of computers over the years.

• Mark I Computer (1937-44): This was the first fully automatic calculating machine, designed by Howard A. Aiken, and its design was based on the technique of punched-card machinery. In this technique, both mechanical and electronic components were used.


• Atanasoff-Berry Computer (1939-42): This computer was developed by Dr. John Atanasoff to solve certain mathematical equations. It used forty-five vacuum tubes for internal logic and capacitors for storage.

• ENIAC (1943-46): The Electronic Numerical Integrator and Computer (ENIAC) was the first electronic computer, developed for military requirements, and was used for many years to solve ballistic problems.

• EDVAC (1946-52): One of the drawbacks of ENIAC was that its programs were wired on boards, which made them difficult to change. To overcome this drawback, the Electronic Discrete Variable Automatic Computer (EDVAC) was designed. The basic idea behind this concept was to store sequences of instructions in the memory of the computer for automatically directing the flow of operations.

• EDSAC (1947-49): Professor Maurice Wilkes developed the Electronic Delay Storage Automatic Calculator (EDSAC), with which addition and multiplication operations could be accomplished.

• UNIVAC I (1951): The UNIVersal Automatic Computer (UNIVAC) was the first digital computer to be installed in the Census Bureau, in 1951, and was used for a decade.

Table 1.1 Generations of Computers

Generation I (1942-1955)
Hardware: Vacuum tubes
Software: Machine language (binary language)
Features: High-speed electronic switching devices; electromagnetic memory; bulky in size; generated a large amount of heat; frequent technical faults; required constant maintenance; used for scientific purposes; air conditioning required
Examples: ENIAC, EDVAC, EDSAC, UNIVAC I

Generation II (1955-1964)
Hardware: Transistors
Software: High-level languages (FORTRAN, COBOL, ALGOL, SNOBOL)
Features: Better electronic switching devices than vacuum tubes; made of germanium semiconductors; magnetic core memory; powerful and more reliable; easy to handle; much smaller than vacuum tubes; generated less heat than vacuum tubes; used by business and industry for commercial data processing; air conditioning required
Examples: Livermore Atomic Research Computer (LARC), IBM

Generation III (1964-1975)
Hardware: Integrated Circuits (ICs), made up of transistors, resistors and capacitors fixed on a single silicon chip
Software: High-level languages (PL/1, PASCAL, BASIC, VISUAL BASIC, C, C++, C#, Java)
Features: ICs were smaller than transistors; consumed less power; dissipated less heat than transistors; more reliable and faster than earlier generations; capable of performing about 1 million instructions per second; large storage capacity; used for both scientific and commercial purposes; air conditioning required
Examples: Mainframes, minicomputers

Generation IV (1975-1989)
Hardware: Microprocessors, made up of Large Scale Integration (LSI) and Very Large Scale Integration (VLSI) circuits
Software: Advanced Java (J2EE, JDO, JavaBeans), PHP, HTML, XML, SQL
Features: The microprocessor had control over logical instructions and memory; semiconductor memories; personal computers were assembled; used in LANs and WANs to connect multiple computers at a time; used a graphical user interface; smaller, more reliable and cheaper than third-generation computers; larger primary and secondary storage memories; had Computer Supported Cooperative Working (CSCW); air conditioning not required
Examples: Personal Computers (PCs), LAN, WAN, CSCW

Generation V (1989-Present)
Hardware: Ultra Large Scale Integration (ULSI), optical disks
Software: Artificial Intelligence, PROLOG, OPS5, Mercury
Features: PCs were assembled; portable and non-portable; powerful desktop PCs and workstations; less prone to hardware failure; user-friendly features such as the Internet and e-mail; air conditioning not required
Examples: Portable PCs, palmtop computers, laptops


In 1952, International Business Machines (IBM) introduced the 701 commercial computer. These computers were used for scientific and business purposes.

The size, shape, cost and performance of computers have changed over the years, but the basic logical structure has not. Any computer system essentially consists of three important parts, namely, the input device, the CPU and the output device. The CPU itself consists of the main memory, the arithmetic logic unit and the control unit.

Table 1.1 will help you understand the generations of computers.

1.3 Types of Computer

A computer is a general-purpose device which can be programmed to carry out a finite set of arithmetic and logical operations. Computers can be classified on the basis of their size, processing speed and cost. According to data processing capabilities, computers are classified as analog, digital and hybrid.

Analog

Analog computers are generally used in industrial process control and to measure physical quantities, such as pressure, temperature, etc. An analog computer does not operate on binary digits to compute. It works on continuous electrical signal inputs, and the output is displayed continuously. Its memory capacity is small, and it can perform only certain types of calculations. However, its operating speed is faster than that of the digital computer, as it works in a totally different mode.

Analog computers perform computations using electrical resistance, voltage, etc. The use of electrical properties signifies that the calculations can be performed in real time, or even faster, at a significant fraction of the speed of light. Typically, an analog computer can integrate a voltage waveform using a capacitor, which accumulates the charge. The basic mathematical operations performed in an electric analog computer are summation, inversion, exponentiation, logarithm, integration with respect to time, differentiation with respect to time, multiplication and division. Hence, in analog computers, an analog signal is produced which is composed of Direct Current or DC and Alternating Current or AC magnitudes, frequencies and phases. The operations in an analog computer are done in parallel. Data is represented as a voltage, which is a compact form of storage.
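The integration operation described above, which an analog machine performs continuously with a capacitor, can be approximated digitally. A minimal sketch, assuming the voltage waveform is given as a Python function of time:

```python
# Digital approximation of the analog integrator from the text: an analog
# computer accumulates charge continuously; here we sum discrete samples.
def integrate(voltage, t_start, t_end, steps=100_000):
    """Approximate the integral of voltage(t) over [t_start, t_end]."""
    dt = (t_end - t_start) / steps
    total = 0.0
    for i in range(steps):
        total += voltage(t_start + i * dt) * dt  # charge gathered in each dt
    return total

# Integrating a constant 2 V signal over 3 seconds gives 6 volt-seconds.
print(round(integrate(lambda t: 2.0, 0.0, 3.0), 6))
```

This also hints at the trade-off the text alludes to: the analog machine does this in parallel and continuously, while the digital version must step through samples one at a time.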

Digital

Digital computers are commonly used for data processing and problem solving using specific programs. A digital computer is designed to process data in numerical form. The data is in discrete form, moving from one state to the next. These processing states involve binary digits, which take the form of the presence or absence of magnetic markers in a storage device, ON/OFF switches or relays. In a digital computer, letters, words, symbols and complete texts are digitally represented, i.e., using only the two digits 0 and 1. It processes data in discrete form and has a large memory to store huge quantities of data.

The functional components of a typical digital computer system are the input/output devices, main memory, control unit and arithmetic logic unit. The processing of data in a digital computer is done with the help of logical circuits, also termed digital circuits. All the circuits processing data inside a computer function in an extremely synchronized mode, which is controlled using a steady oscillator acting as the computer's 'clock'. The clock rate of a typical digital computer ranges from several million cycles per second to several hundred million cycles, whereas the clock rate of the fastest digital computers is about a billion cycles per second. Hence, digital computers operate at very high speed and are able to perform trillions of logical or arithmetic operations per second, providing quick solutions to problems that would not be possible for a human being to work out manually.
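The 0-and-1 representation of text described above can be illustrated in a few lines, using ASCII codes as the character-to-number mapping:

```python
# Sketch: how a digital computer represents text using only the digits 0
# and 1. Each character maps to a number (here, its ASCII code), and that
# number is stored as a pattern of bits.
def to_bits(text):
    """Return the 8-bit binary pattern for each character of `text`."""
    return [format(ord(ch), "08b") for ch in text]

print(to_bits("Hi"))  # ['01001000', '01101001']
```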

Hybrid

Hybrid computers are a combination of digital and analog computers. A hybrid computer uses the best features of digital and analog computers. It helps the user to process both continuous and discrete data. Hybrid computers are generally used for weather forecasting and industrial process control.


The digital component basically functions as a controller to provide logical operations, whereas the analog component functions as a solver to provide solutions of differential equations. Remember that hybrid computers are different from hybrid systems. A hybrid system is a digital computer equipped with an analog-to-digital converter for input and a digital-to-analog converter for output. The term 'hybrid computer' signifies a mixture of different digital technologies to process specific applications with the help of various specific processor technologies.

According to purpose, computers are either general purpose or specific purpose.

Micro, Mini, Mainframe and Supercomputers

On the basis of size, computers are classified as micro, mini, mainframe and supercomputers.

Microcomputers

Microcomputers are developed from advanced computer technology. They are commonly used at home, in the classroom and in the workplace. Microcomputers are called home computers, personal computers, laptops, personal digital assistants, etc. They are powerful and easy to operate. In recent years, computers have been made portable and affordable. The major characteristics of a microcomputer are as follows:

• Microcomputers are capable of performing data processing jobs and solving numerical programs. Microcomputers work rapidly, like minicomputers.

• Microcomputers have reasonable memory capacity, which can be measured in megabytes.

• Microcomputers are reasonably priced. Varieties of microcomputers are available in the market as per the requirements of smaller business companies and educational institutions.

• The processing speed of microcomputers is measured in MHz. A microcomputer running at 90 MHz works at approximately 90 MIPS (Million Instructions Per Second).

• Microcomputers have drives for floppy disks, compact disks and hard disks.

• Only one user can operate a microcomputer at a time.

• Microcomputers are usually dedicated to one job. Millions of people use microcomputers to increase their personal productivity.

• Useful accessory tools, such as a clock, calendar, calculator, daily schedule reminders, scratch pads, etc., are available in a microcomputer.

• Laptop computers, also called notebook computers, are microcomputers. They use a battery power source. Laptop computers have a keyboard, mouse, floppy disk drive, CD drive, hard disk drive and monitor. Laptop computers are expensive in comparison to personal computers.
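Taking the MHz/MIPS figure above at face value (roughly one instruction per clock cycle, a simplification the text itself makes), the arithmetic works out like this:

```python
# Back-of-the-envelope check of the MIPS figure quoted above. These numbers
# are illustrative, not benchmarks of any real machine.
def seconds_for(instructions, mips):
    """Time in seconds to run `instructions` at `mips` million instr/sec."""
    return instructions / (mips * 1_000_000)

# A 450-million-instruction job at 90 MIPS:
print(seconds_for(450_000_000, 90))  # 5.0 seconds
```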

Personal Computers

A PC is a small, single-user, microprocessor-based computer that sits on your desktop and is generally used at homes, offices and schools. As the name implies, PCs were mainly designed to meet the personal computing needs of individuals. Personal computers are used for preparing normal text documents, spreadsheets with predefined calculations and business analysis charts, database management systems and accounting systems, and also for designing office stationery, banners, bills and handouts. Children and youth love to play games and surf the Internet, communicate with friends via e-mail and Internet telephony, and do many other entertaining and useful tasks.

The configuration varies from one PC to another depending on its usage. However, it consists of a CPU or system unit, a monitor, a keyboard and a mouse. It has a main circuit board or motherboard (consisting of the CPU and the memory), hard disk storage, a floppy disk drive, a CD-ROM drive, some special add-on cards, like a Network Interface Card or NIC, and ports for connecting peripheral devices like printers.


PCs are available in two models: desktop and tower. In the desktop model, the monitor is positioned on top of the system unit, whereas in the tower model the system unit is designed to stand by the side of the monitor or even on the floor to save desktop space. Due to this feature, the tower model is very popular.

Some popular operating systems for PCs are MS DOS, Microsoft Windows, Windows NT, Linux and UNIX. Most of these operating systems have multitasking capability, which eases operation and saves time when a user has to switch between two or more applications while performing a job. Some leading PC manufacturers are IBM, Apple, Compaq, Dell, Toshiba and Siemens.

Types of Personal Computers

Notebook/Laptop Computers

Notebook computers are battery-operated personal computers. Smaller than a briefcase, these are portable computers and can be used in places like libraries, in meetings or even while travelling. Popularly known as laptop computers, or simply laptops, they weigh less than 2.5 kg and can be only 3 inches thick (refer Figure). Notebook computers are usually more expensive than desktop computers, though they have almost the same functions; since they are sleeker and portable, they have a complex design and are more difficult to manufacture. These computers have large storage space and other peripherals, such as a serial port, PC card, modem or network interface card, CD-ROM drive and printer. They can also be connected to a network to download data from other computers or from the Internet. A notebook computer has a keyboard and a flat Liquid Crystal Display (LCD) screen, and can also have a trackball and a pointing stick.

A notebook computer uses the MS DOS or Windows operating system. It is used for making presentations, as it can be plugged into an LCD projection system. The data processing capability of a notebook computer is as good as that of an ordinary PC, because both use the same type of processor, such as an Intel Pentium processor. However, a notebook computer generally has less hard disk storage than a PC.

Tablet PC

A Tablet PC is a mobile computer that looks like a notebook or a small writing slate but uses a stylus pen or your fingertip to write on the touch screen. It saves whatever you scribble on the screen with the pen in the same way as you have written it. The same picture can then be converted to text with the help of handwriting recognition software.

PDA

A Personal Digital Assistant (PDA) is a small, palm-sized, hand-held computer which has a small color touch screen with audio and video features. PDAs are nowadays used as smartphones, Web-enabled palmtop computers, portable media players or gaming devices.

Most PDAs today typically have a touch screen for data entry, a data storage/memory card, and Bluetooth, Wireless Fidelity (Wi-Fi) or infrared connectivity, and can be used to access the Internet and other networks.

1.4 Computer Hardware

The physical components of a computer system, which you can see and touch, are called hardware.

Fig. 1.3 Laptop Computer (foldable flat screen)

Motherboard: A motherboard is the main PCB (Printed Circuit Board), sometimes alternatively known as a logic board or main board, of a personal computer or, in fact, of any complex electronic system. It is basically a flat fibreglass platform which hosts the CPU (Central Processing Unit), the main electronic components, device controller chips, main memory slots, slots for attaching the storage devices and other subsystems.

Sockets and Ports

Main Power Socket: On the top part of the rear of your computer system, you will find the main power cable socket, which supplies power from the electric mains to the computer system. This socket is part of the main power supply unit of your computer (refer Figure).

Monitor Power Socket: Right below the main power cable socket is the socket that supplies power from the computer system to the computer monitor. In some computers, where you might not find this socket, you can plug the monitor directly into the main power supply.

PS/2 Mouse Port: Next you will find a small, round, green-colored port with a six-pin connector and a small logo of a mouse printed next to it. This is where your PS/2 mouse is plugged in.

PS/2 Keyboard Port: Right next to it you will find a similar purple-colored port with a keyboard logo printed next to it. This is where your PS/2 keyboard is plugged in.

Fan Housings: You will notice two fan housings at the back of your computer. One fan housing is part of the power supply unit, and the other will be somewhere below it, to remove the heat generated by the CPU.

Serial Port: This is a 9-pin connector normally used to attach old serial-port mice, hand-held scanners, modems, joysticks, game pads and other such devices.

Parallel Port: This is a 25-pin connector used to attach parallel-port printers, modems, external hard disk drives, etc.

Audio Jacks: There are three audio jacks in your computer system. One jack is used for connecting your speakers or headphones, the second is used to connect the microphone and the third to connect another audio device, such as a music system.

Local Area Network or LAN Port: The LAN port is where the Registered Jack or RJ-45 connector of your LAN cable is plugged in to connect your computer to other computers or the Internet.

Universal Serial Bus or USB Ports: The USB port is designed to connect multiple peripheral devices through a single standardized interface and has a plug-and-play option that allows devices to be connected or disconnected without restarting or turning off the computer. It can be used for many serial and parallel port devices, such as mice, printers, modems, joysticks, game pads, scanners, digital cameras and other such devices.

VGA Port: This is a 15-pin connector that connects the signal cable of the monitor to the computer.

Memory

Memory is used for storage and retrieval of instructions and data in a computer system. The CPU contains

several registers for storing data and instructions. But these can store only a few bytes. If all the instructions and

data being executed by the CPU were to reside in secondary storage, such as magnetic tapes and disks, and

loaded into the registers of the CPU as the program execution proceeded, it would lead to the CPU being idle for

most of the time, since the speed at which the CPU processes data is much higher than the speed at which data

can be transferred from disks to registers. Every computer thus requires storage space where instructions and

[Figure: Rear view of a system unit, labelling the fan housing, serial port, parallel port, audio jacks, USB ports, LAN port, VGA port, PS/2 keyboard and mouse ports, A/C power input and main power socket]


data of a program can reside temporarily when the program is being executed. This temporary storage area is

built into the computer hardware and is known as the primary storage or main memory. Devices that provide

backup storage, such as magnetic tapes and disks are called secondary storage or auxiliary memory. A memory

system is mainly classified into the following categories.

Internal Processor Memory: This is a small set of high speed registers placed inside a processor and

used for storing temporary data while processing.

Primary Storage Memory: This is the main memory of the computer which communicates directly with the

processor. This memory is large in size and fast, but not as fast as the internal memory of the processor. It

comprises a number of integrated chips mounted on a printed circuit board that plugs directly into the motherboard.

RAM is an example of primary storage memory.

Secondary Storage Memory: This stores all the system software and application programs and is basically

used for data backups. It is much larger in size and slower than primary storage memory. Hard disk drives, floppy

disk drives and flash drives are a few examples of secondary storage memory.

Memory Capacity: Capacity, in a computer system, is defined in terms of the number of bytes that it can store

in its main memory. This is usually stated in terms of KiloBytes (KB), which is 1024 bytes, or MegaBytes (MB),

which is equal to 1024 KB (1,048,576 bytes). The rapidly increasing memory capacity of computer systems has

resulted in defining the capacity in terms of GigaBytes (GB), which is 1024 MB (1,073,741,824 bytes). Thus a

computer system having a memory of 256 MB is capable of storing (256 × 1024 × 1024) 268,435,456 bytes or

characters.
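This factor-of-1024 arithmetic can be checked with a short Python sketch (the constant names are purely illustrative):

```python
# Binary-prefix capacity units: each step up is a factor of 1024.
KB = 1024          # bytes in a kilobyte (2**10)
MB = 1024 * KB     # bytes in a megabyte (2**20)
GB = 1024 * MB     # bytes in a gigabyte (2**30)

print(MB)          # 1048576
print(GB)          # 1073741824
print(256 * MB)    # 268435456 -- bytes in a 256 MB memory
```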

Registers

The primary task that the CPU performs is the execution of instructions. It executes every instruction by means

of a number of small operations known as microoperations. Thus, it can be seen that:

• The CPU needs an extremely large main memory.

• The speed of the CPU must be as fast as possible.

To understand further, let us define two relevant terms:

• Memory Cycle Time: Time taken by the CPU to access the memory

• Cycle Time of the CPU: The time that the CPU takes for executing the shortest well defined

microoperation

It has been observed that the time taken by the CPU to access the memory is about 1-10 times higher

than the time that the CPU takes for executing the shortest well-defined microoperation. Therefore,

CPU registers serve as temporary storage areas within the CPU. CPU registers are termed as fast

memory and can be accessed almost instantaneously.

Further, the number of bits a register can store at a time is called the length of the register. Most CPUs

sold today have 32-bit or 64-bit registers. The size of the register is also called the word size and it

indicates the amount of data that a CPU can process at a time. Thus, the bigger the word size, the

faster the computer can process data.

The number of registers varies among computers but typical registers found in most computers include:

• Memory Buffer Register: When data is received from the memory, it is temporarily held in the Memory Buffer Register or MBR.

• Memory Address Register: The memory location’s address where data is to be stored (in the case of write operations) and the location from where data is to be accessed (in the case of read operations) is specified by the Memory Address Register or MAR.


• Accumulator: Interactions with the ALU are carried out by the Accumulator or AC, in which the output and input operands are stored. This register, therefore, holds the initial data to be operated upon, the intermediate results and the final results of processing operations.

• Program Counter: The next instruction to be executed subsequent to the execution of the current instruction is tracked by the Program Counter or PC.

• Instruction Register: Instructions are loaded in the instruction register prior to being executed, i.e., the instruction register holds the current instruction that is being executed.
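As a rough illustration (the three-instruction program and its text format below are invented for this sketch and do not correspond to any real instruction set), the way these registers cooperate during the fetch and execute cycle can be simulated in Python:

```python
# Toy fetch-execute cycle: the PC supplies an address, the MAR latches it,
# the memory contents arrive in the MBR, and the instruction moves to the
# IR for decoding while the AC accumulates results.
memory = {0: "LOAD 5", 1: "ADD 3", 2: "HALT"}   # invented program

PC = 0          # Program Counter: address of the next instruction
AC = 0          # Accumulator: holds operands and results

while True:
    MAR = PC                      # address of the cell to be read
    MBR = memory[MAR]             # data received from memory
    IR = MBR                      # current instruction being executed
    PC += 1                       # point at the next instruction
    op, _, arg = IR.partition(" ")
    if op == "LOAD":
        AC = int(arg)
    elif op == "ADD":
        AC += int(arg)
    elif op == "HALT":
        break

print(AC)   # 8
```

Each pass of the loop is one instruction cycle: the PC names the next instruction, the MAR and MBR stage the memory access, and the IR holds the instruction while it executes.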

Processors used in PCs

The Central Processing Unit or the CPU is the most important component of the computer. The CPU itself is an

internal part of the computer system and is usually a microprocessor based chip housed on a single or at times

multiple printed circuit boards. The CPU is directly inserted on the motherboard and each motherboard is compatible

with a specific series of CPUs only. The CPU generates a lot of heat and has a heat sink, and a cooling fan

attached on the top which helps it to disperse heat.

The market of microprocessors is dominated primarily by Intel and AMD, both of which manufacture

International Business Machines or IBM compatible CPUs. Motorola also manufactures CPUs for Macintosh

based PCs. Cyrix, another IBM compatible CPU manufacturer, is next in line after Motorola in the market, in

terms of global sales.

Types of Processors

The brands of CPUs listed are not the only differentiating factors between different processors. There are

various technical aspects to these processors which allow us to differentiate between CPUs of different power,

speed and processing capability. Accordingly, each of these manufacturers sells numerous product lines offering

CPUs of different architecture, speed, price range, etc. The following are the most common aspects of modern

CPUs that enable us to judge their quality or performance:

• 32 or 64-bit Architecture: A bit is the smallest unit of data that a computer processes. 32 or 64-bit

architecture refers to the number of bits that the CPU can process at a time.

• Clock Rate: The speed at which the CPU performs basic operations, measured in Hertz (Hz) or, in

modern computers, MHz or GHz.

• Number of Cores: CPUs with more than one core are essentially multiple CPUs running in parallel to

enable more than one operation to be performed simultaneously. Current ranges of CPUs offer up to eight

cores. Currently, the Dual core (i.e., two cores) CPU is most commonly used for standard desktops and

laptops, and Quad core (i.e., four cores) is popular for entry level servers.

• Additional Technology or Instruction Sets: These refer to unique features that a particular CPU or

range of CPUs offer to provide additional processing power or reduced running temperature. These range

from Intel’s MMX, Streaming Single Instruction Multiple Data Extension or SSE3, and HT to AMD’s

3DNow! and Cool’n’Quiet.

The various types of popular, high performing and cost efficient CPUs ranging from the last decade to the

present are given below:

Intel Processors

• Intel 8086, 80286, 80386 & 80486 (discontinued line).

• Intel Pentium 1, 2, 3 & 4 (Single Core, 32-bit).

• Intel Celeron and Celeron D (Single Core).

• Intel Celeron D and Pentium 4 (Pentium D – Dual Core, 2 sets of L1 and L2 caches).


• Intel Xeon, Xeon MP and Itanium (Dual/Quad-Core, 64-Bit & L1, L2, L3 Cache). Xeons currently come

in two flavours: DP and MP. DP means ‘Dual Processing’, i.e., up to 2 processors in symmetric multiprocessing;

MP means ‘MultiProcessor’, as in more than 2, up to 8.

Advanced Micro Devices or AMD Processors

• AMD Socket-7 & K6 (Single Core, 32-bit).

• Duron and Sempron (462/754 Socket, up to 1.8 GHz, Single Core, 32-Bit, L2 Cache).

• Athlon XP/XP-M Processors.

• Athlon MP Processors.

• Athlon64 (Single/Dual-Core, 64-Bit, Socket 754, L2 Cache).

• Athlon64 & AthlonFX (Speeds up to 4 GHz, Socket 939, Dual Channel Memory controller).

• Opteron, OpteronMP, early AthlonFX (Socket 940).

• Phenom (AMD Socket AM2+ quad-core processor, 64-Bit, L1, L2 & L3 Cache).

• Phenom (Socket AM3, to be released in 2009).

The eight-core, 64-bit processor that can run as fast as 3 to 4 GHz is the most advanced processor available

today. Quad-core 64-bit chips have been released by AMD and Intel.

1.5 Computer Software

A computer cannot do any work on its own. It depends on the logical sequence of instructions to perform any

function. This logical sequence of instructions is termed as a ‘computer program’ and it is a part of the computer

software. Basically, the sequences of instructions are the algorithms that step wise instruct the computer what to

do. Hence, a computer cannot work without software. The term ‘software’ was first used in print by John W.

Tukey in 1958.

There are various types of software designed to perform specific tasks. The different types of computer

software are interpreter, assembler, compiler, operating systems, networking, word processing, accounting,

presentation, graphics, computer games and so on. The computer software converts the instructions in a program

into a machine language so that the computer can execute it.

Computer software is developed and designed by computer software engineers on the principles of basic

mathematical analysis and logical reasoning. The software once developed is evaluated and tested before it is

implemented. Thus, the programming software allows you to develop the

desired instruction sequences, whereas in the application software the

instruction sequences are predefined. Computer software can range in size

from only a few instructions to millions of instructions, for example, a

word processor or a Web browser. The figure shows how software mediates

between the user and the computer system.

On the functional basis, software is categorized as follows:

• System Software: It helps in the proper functioning of computer

hardware. It includes device drivers, operating systems, servers

and utilities.

• Programming Software: It provides tools to help a programmer

in writing computer programs and software using various

programming languages. It includes compilers, debuggers,

interpreters, linkers, text editors and an Integrated Development Environment (IDE).

Fig. 1.6 Interaction of Software

between User and a Computer System


• Application Software: It helps the end users to complete one or more specific tasks. The specific

applications include industrial automation, business software, computer games, telecommunications,

databases, educational software, medical software and military software.

Types of Software

Software can be applied in countless situations, such as in business, education, social sector and in other fields.

The only thing that is required is a defined set of procedural steps. In other words, software can be engaged in

any field which can be described in logical and related steps. Each piece of software is designed to suit some specific

goals. These goals are data processing, information sharing, promoting communication, and so on. Software is

classified according to the range of potential applications. These classifications are listed below:

• System Software: This class of software is responsible for managing and controlling operations of a

computer system. System software is a group of programs rather than one program and is responsible for

using computer resources efficiently and effectively. Operating system, for example, is system software

which controls the hardware, manages memory and multitasking functions, and acts as an interface between

applications programs and the computer.

• Real Time Software: This class of software observes, analyzes and controls real world events as they

occur. Generally, a real time system guarantees a response to an external event within a specified period

of time. The real time software, for example, is used for navigation in which the computer must react to a

steady flow of new information without interruption. Most defence organizations all over the world use

real time software to control their military hardware.

• Business Software: This class of software is widely used in areas where the management and control of

financial activities is of utmost importance. The fundamental component of a business system comprises

payroll, inventory, accounting and software that permits users to access relevant data from the database. These

activities are usually performed with the help of specialized business software that facilitates an efficient

framework for business operations and management decisions.

• Engineering and Scientific Software: This class of software has emerged as a powerful tool to provide

help in the research and development of next generation technology. Applications, such as study of celestial

bodies, study of undersurface activities and programming of the orbital path for a space shuttle, are heavily dependent on engineering and scientific software. This software is designed to perform precise calculations

on complex numerical data that are obtained in a real time environment.

• Artificial Intelligence Software: This class of software is used where the problem solving technique is non-algorithmic in nature. The solutions of such problems are generally not amenable to computation or

straightforward analysis. Instead, these problems require specific problem solving strategies that include

expert systems, pattern recognition and game playing techniques. In addition, it involves various types of searching techniques, including the use of heuristics. The role of artificial intelligence software is to add

certain degree of intelligence into the mechanical hardware to do the desired work in an agile manner.

• Web Based Software: This class of software acts as an interface between the user and the Internet. Data on the Internet can be in the form of text, audio or video format linked with hyperlinks. A Web browser

is Web Based software that retrieves Web pages from the Internet. The software incorporates executable

instructions written in special scripting languages, such as Common Gateway Interface (CGI) or Active Server Page (ASP). Apart from providing navigation on the Web, this software also supports additional

features that are useful while surfing the Internet.

• Personal Computer Software: This class of software is used for official and personal use on daily basis.

The personal computer software market has grown over the last two decades from normal text editor to

word processor and from simple paint brush to advanced image editing software. This software is used predominantly in almost every field, whether it is a database management system, a financial accounting

package or a multimedia based software. It has emerged as a versatile tool for daily life applications.


Software can be also classified in terms of how closely software users or software purchasers are associated

with the software development.

• Commercial Off-The-Shelf or COTS: In this category comes the software for which there is no

committed user before it is put up for sale. The software users have less or no contact with the vendor

during development. It is sold through retail stores or distributed electronically. This software includes commonly used programs, such as word processors, spreadsheets, games, income tax programs, as

well as software development tools, such as software testing tools and object modelling tools.

• Customized or Bespoke: In this classification, software is developed for a specific user who is

bound by some kind of formal contract. Software developed for an aircraft, for example, is usually

done for a particular aircraft making company. They are not purchased ‘off-the-shelf’ like any word processing software.

• Customized COTS: In this classification, a user can enter into a contract with the software vendor to

develop a COTS product for a special purpose, that is, the software can be customized according to the needs of the user. Another growing trend is the development of COTS software components, i.e., the

components that are purchased and used to develop new applications. The COTS software component

vendors are essentially parts stores. These components are classified according to their application types. These types are listed as follows:

o Standalone Software: This class of software resides on a single computer and does not interact with any other software installed on a different computer.

o Embedded Software: This class of software refers to the part of unique application involving

hardware like automobile controller.

o Real Time Software: Operations in this class of software are executed within very short time

limits, often microseconds, e.g., radar software in an air traffic control system.

o Network Software: In this class of software, software and its components interact across a

network.

System Software

It consists of all the programs, languages and documentation supplied by the manufacturer with the computer.

These programs allow the user to communicate with the computer and write or develop his own programs. This

software makes the machine easier to use and makes an efficient use of the resources of the hardware. Systems

software are programs held permanently on a machine which relieve the programmer from mundane tasks and

improve resource utilization. MS DOS or Microsoft Disk Operating System was one of the most widely used

systems software for IBM compatible microcomputers. Windows and its various versions are popular examples

of systems software today. System software is installed permanently on a computer system and is used for daily

routine work.

Application Software

These are software programs installed by users to perform tasks according to their specific requirements, such as

an accounting system used in a business organization or a designing program used by engineers. They also

include all the programs, languages and other utility programs. These programs enable the user to communicate

with the computer and develop other customized packages. They also enable maximum and efficient usage of the

computer hardware and other available resources.

Licensed Software

While there is a large availability of open source or free software online, not all software available in the market

is free for use. Some software falls under the category of Commercial Off-The-Shelf (COTS). COTS is a


term used for software and hardware technology which is available to the general public for sale, license or lease.

In other words, to use COTS software, you must pay its developer in one way or another.

Most of the application software available in the market need a software license for use.

‘A software license is a legal instrument governing the usage or redistribution of copyright protected

software. A typical software license grants a permission to end user to use one or more copies of software in

ways where such a use would otherwise constitute infringement of the software publisher’s exclusive rights

under copyright law. In effect, the software license acts as a promise from the software publisher to not sue the

end user for engaging in activities that would normally be considered exclusive.’

Software is licensed in different categories. Some of these licenses are based on the number of unique

users of the software while other licenses are based on the number of computers on which the software can be

installed. A specific distinction between licenses would be an Organizational Software License which grants an

organization the right to distribute the software or application to a certain number of users or computers within the

organization and a Personal Software License which allows the purchaser of the application to use the software

on his or her computer only.

Free Domain Software

To understand this let us distinguish between the commonly used terms Freeware and Free Domain software.

The term ‘freeware’ has no clear accepted definition, but is commonly used for packages that permit redistribution

but not modification. This means that their source code is not available. Free Domain software is a software that

comes with permission for anyone to use, copy and distribute, either verbatim or with modifications, either free or

for a fee. In particular, this means that the source code must be available. Free Domain software can be freely

used, modified and redistributed but with one restriction: the redistributed software must be distributed with the

original terms of free use, modification and distribution. This is known as ‘copyleft’. Free software is a matter of

freedom, not price. Free software may be packaged and distributed for a fee. The ‘Free’ here refers to the ability

of reusing it – modified or unmodified, as a part of another software package. The concept of free software is the

brainchild of Richard Stallman, head of the GNU project. The best known example of free software is Linux, an

operating system that is proposed as an alternative to Windows or other proprietary operating systems. Debian is

an example of a distributor of a Linux package.

Free software should therefore not be confused with freeware which is a term used for describing software

that can be freely downloaded and used but which may contain restrictions for modification and reuse.

2. NUMBER SYSTEM

A number is an idea that is used to refer to an amount of things. People use number words, number gestures and

number symbols. Number words are said out loud. Number gestures are made with some part of the body, usually

the hands. Number symbols are marked or written down. A number symbol is called a numeral. The number is

the idea we think of when we see the numeral, or when we see or hear the word.

On hearing the word number, we immediately think of the familiar decimal number system with its 10

digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. These numerals are called Arabic numerals. Our present number system

provides modern mathematicians and scientists with great advantages over those of previous civilizations and is

an important factor in our advancement. Since fingers are the most convenient tools nature has provided, human

beings use them in counting. So, the decimal number system followed naturally from this usage.

A number system of base, or radix, r is a system that uses distinct symbols for r digits. Numbers are represented by

a string of digit symbols. To determine the quantity that the number represents, it is necessary to multiply each

digit by an integer power of r and then form the sum of all the weighted digits. It is possible to use any whole

number greater than one as a base in building a numeration system. The number of digits used is always equal to

the base.
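The rule just described, multiplying each digit by an integer power of r and summing the weighted digits, can be written as a small Python function (a sketch; digits are supplied as a list, most significant first):

```python
def to_decimal(digits, base):
    """Value of a numeral given as a list of integer digits, MSD first."""
    value = 0
    for d in digits:
        # Horner's form: equivalent to summing d * base**position.
        value = value * base + d
    return value

print(to_decimal([1, 0, 1, 1], 2))   # 11
print(to_decimal([5, 6, 7], 8))      # 375
print(to_decimal([3, 7, 9], 10))     # 379
```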


There are four systems of arithmetic which are often used in digital systems. These systems are as

follows:

1. Decimal

2. Binary

3. Octal

4. Hexadecimal

In any number system, there is an ordered set of symbols known as digits. Collection of these digits makes

a number which in general has two parts, integer and fractional, set apart by a radix point (.). Hence, a number

system can be represented as,

N = (aₙ₋₁ aₙ₋₂ ... a₁ a₀ . a₋₁ a₋₂ ... a₋ₘ)b

where the digits aₙ₋₁ ... a₀ to the left of the radix point form the integer portion and the digits a₋₁ ... a₋ₘ to its right form the fractional portion, and

N = A number

b = Radix or base of the number system

n = Number of digits in the integer portion

m = Number of digits in the fractional portion

aₙ₋₁ = Most Significant Digit (MSD)

a₋ₘ = Least Significant Digit (LSD)

and 0 ≤ aᵢ ≤ b – 1 for every digit aᵢ

Base or Radix: The base or radix of a number is defined as the number of different digits which can

occur in each position in the number system.

2.1 Decimal Number System

The number system which utilizes ten distinct digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 is known as decimal number

system. It represents numbers in terms of groups of ten, as shown in Figure.

We would be forced to stop at 9 or to invent more symbols if it were not for the use of positional notation.

It is necessary to learn only 10 basic numbers and positional notational system in order to count any desired

figure.

Decimal Position Values as Powers of 10

The decimal number system has a base or radix of 10. Each of the ten decimal digits 0 through 9, has a

place value or weight depending on its position. The weights are units, tens, hundreds, and so on. The same can

be written as powers of its base as 10⁰, 10¹, 10², 10³, etc. Thus, the number 1993 represents a quantity equal

to 1000 + 900 + 90 + 3. Actually, this should be written as {1 × 10³ + 9 × 10² + 9 × 10¹ + 3 × 10⁰}. Hence, 1993

is the sum of all digits multiplied by their weights. Each position has a value 10 times greater than the position to

its right.

For example, the number 379 actually stands for the following representation.

10²   10¹   10⁰
100    10     1
  3     7     9

3 × 100 + 7 × 10 + 9 × 1

∴ 379₁₀ = 3 × 100 + 7 × 10 + 9 × 1

= 3 × 10² + 7 × 10¹ + 9 × 10⁰

In this example, 9 is the Least Significant Digit (LSD) and 3 is the Most Significant Digit (MSD).

Example 1: Write the number 1936.469 using decimal representation.

Solution: 1936.469₁₀ = 1 × 10³ + 9 × 10² + 3 × 10¹ + 6 × 10⁰ + 4 × 10⁻¹ + 6 × 10⁻² + 9 × 10⁻³

= 1000 + 900 + 30 + 6 + 0.4 + 0.06 + 0.009 = 1936.469

It is seen that powers are numbered to the left of the decimal point starting with 0 and to the right of the

decimal point starting with –1.

The general rule for representing numbers in the decimal system by using positional notation is as follows:

aₙaₙ₋₁ ... a₂a₁a₀ = aₙ10ⁿ + aₙ₋₁10ⁿ⁻¹ + ... + a₂10² + a₁10¹ + a₀10⁰

Where n is the number of digits to the left of the decimal point.

2.2 Binary Number System

A number system that uses only two digits, 0 and 1 is called the binary number system. The binary number

system is also called a base two system. The two symbols 0 and 1 are known as bits (binary digits).

The binary system groups numbers by two’s and by powers of two as shown in Figure. The word binary

comes from a Latin word meaning two at a time.

Binary Position Values as a Power of 2

The weight or place value of each position can be expressed in terms of 2 and is represented as 2⁰, 2¹, 2², etc. The least significant digit has a weight of 2⁰ (= 1). The second position to the left of the least significant digit is multiplied by 2¹ (= 2). The third position has a weight equal to 2² (= 4). Thus, the weights are in the ascending powers of 2, i.e., 1, 2, 4, 8, 16, 32, 64, 128, etc.

The numeral 10₂ (one, zero, base two) stands for two, the base of the system.

In binary counting, single digits are used for none and one. Two-digit numbers are used for 10₂ and 11₂ [2 and 3 in decimal numerals]. For the next counting number, 100₂ (4 in decimal numerals), three digits are necessary. After 111₂ (7 in decimal numerals), four-digit numerals are used until 1111₂ (15 in decimal numerals) is reached, and so on. In a binary numeral, every position has a value 2 times the value of the position to its right.

A binary number with 4 bits is called a nibble and a binary number with 8 bits is known as a byte.

For example, the number 1011₂ actually stands for the following representation:

1011₂ = 1 × 2³ + 0 × 2² + 1 × 2¹ + 1 × 2⁰

= 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1

∴ 1011₂ = 8 + 0 + 2 + 1 = 11₁₀
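Python’s built-in int() performs the same weighted sum when given an explicit base, which makes it easy to check hand conversions:

```python
# int(string, base) evaluates the digits against powers of the base.
print(int("1011", 2))     # 11
print(int("110111", 2))   # 55
print(bin(11))            # '0b1011' -- the reverse direction
```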


In general,

[bₙbₙ₋₁ ... b₂b₁b₀]₂ = bₙ2ⁿ + bₙ₋₁2ⁿ⁻¹ + ... + b₂2² + b₁2¹ + b₀2⁰

Similarly, the binary number 10101.011 can be written as follows:

 1     0     1     0     1   .   0     1     1
2⁴    2³    2²    2¹    2⁰   .  2⁻¹   2⁻²   2⁻³
(MSD)                                      (LSD)

∴ 10101.011₂ = 1 × 2⁴ + 0 × 2³ + 1 × 2² + 0 × 2¹ + 1 × 2⁰ + 0 × 2⁻¹ + 1 × 2⁻² + 1 × 2⁻³

= 16 + 0 + 4 + 0 + 1 + 0 + 0.25 + 0.125 = 21.375₁₀
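For a binary numeral with a fractional part, the weights continue into negative powers of 2. A minimal Python sketch (the function name is our own):

```python
def binary_to_decimal(num):
    """Convert a binary string such as '10101.011' to its decimal value."""
    int_part, _, frac_part = num.partition(".")
    value = int(int_part, 2) if int_part else 0
    for i, bit in enumerate(frac_part, start=1):
        value += int(bit) * 2 ** -i      # weights 2**-1, 2**-2, ...
    return value

print(binary_to_decimal("10101.011"))   # 21.375
```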

In each binary digit, the value increases in powers of two starting with 0 to the left of the binary point and

decreases to the right of the binary point starting with power –1.

Why Binary Number System is used in Digital Computers?

Binary number system is used in digital computers because all electrical and electronic circuits can be made

to respond to the two states concept. A switch, for instance, can be either opened or closed, only two possible

states exist. A transistor can be made to operate either in cutoff or saturation, a magnetic tape can be either

magnetized or non magnetized, a signal can be either HIGH or LOW, a punched tape can have a hole or no

hole. In all of the above illustrations, each device is operated in any one of the two possible states and the

intermediate condition does not exist. Thus, 0 can represent one of the states and 1 can represent the other.

Hence, binary numbers are convenient to use in analysing or designing digital circuits.

2.3 Octal Number System

The octal number system was used extensively by early minicomputers. However, for both large and small

systems, it has largely been supplanted by the hexadecimal system. Sets of 3-bit binary numbers can be represented

by octal numbers and this can be conveniently used for the entire data in the computer.

A number system that uses eight digits, 0, 1, 2, 3, 4, 5, 6 and 7, is called an octal number system. It has

a base of eight. The digits, 0 through 7 have exactly the same physical meaning as decimal symbols. In this

system, each digit has a weight corresponding to its position as shown below:

aₙ8ⁿ + ... + a₃8³ + a₂8² + a₁8¹ + a₀8⁰ + a₋₁8⁻¹ + a₋₂8⁻² + ... + a₋ₙ8⁻ⁿ

Octal Odometer

Octal odometer is a hypothetical device similar to the odometer of a car. Each display wheel of this odometer

contains only eight digits (teeth), numbered 0 to 7. When a wheel turns from 7 back to 0 after one rotation, it sends

a carry to the next higher wheel. Table below shows equivalent numbers in decimal, binary and octal systems.

Table Equivalent Numbers in Decimal, Binary and Octal Systems

Decimal (Radix 10) Binary (Radix 2) Octal (Radix 8)

0 000 000 0

1 000 001 1

2 000 010 2

3 000 011 3

4 000 100 4

5 000 101 5

6 000 110 6


7 000 111 7

8 001 000 10

9 001 001 11

10 001 010 12

11 001 011 13

12 001 100 14

13 001 101 15

14 001 110 16

15 001 111 17

16 010 000 20

Consider an octal number [567.3]₈. It is pronounced as five, six, seven octal point three and not five hundred

sixty seven point three. The coefficients of the integer part are a₀ = 7, a₁ = 6, a₂ = 5 and the coefficient of the

fractional part is a₋₁ = 3.
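The octal weights can be verified in Python; here the fractional digit contributes 3 × 8⁻¹:

```python
# The integer part of 567.3 octal, by built-in conversion and by hand.
print(int("567", 8))      # 375
print(5*64 + 6*8 + 7)     # 375, the weighted sum spelled out
# Adding the fractional digit's weight, 3 * 8**-1:
print(375 + 3 / 8)        # 375.375
```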

2.4 Hexadecimal Number System

The hexadecimal system groups numbers by sixteen and powers of sixteen. Hexadecimal numbers are used

extensively in microprocessor work. Most minicomputers and microcomputers have their memories organized

into sets of bytes, each consisting of eight binary digits. Each byte either is used as a single entity to represent a

single alphanumeric character or broken into two 4-bit pieces. When the bytes are handled in two 4-bit pieces, the

programmer is given the option of declaring each 4-bit character as a piece of a binary number or as two BCD

numbers.

The hexadecimal number is formed from a binary number by grouping bits in groups of 4 bits each, starting

at the binary point. This is a logical way of grouping, since computer words come in 8 bits, 16 bits, 32 bits and so

on. In a group of 4 bits, the decimal numbers 0 to 15 can be represented as shown in Table.

The hexadecimal number system has a base of 16. Thus, it has 16 distinct digit symbols. It uses the digits

0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 plus the letters A, B, C, D, E and F as 16 digit symbols. The relationship among octal,

hexadecimal and binary is shown in Table. Each hexadecimal number represents a group of four binary digits.

Table Equivalent Numbers in Decimal, Binary, Octal and Hexadecimal Number Systems

Decimal Binary Octal Hexadecimal

(Radix 10) (Radix 2) (Radix 8) (Radix 16)

0 0000 0 0

1 0001 1 1

2 0010 2 2

3 0011 3 3

4 0100 4 4

5 0101 5 5

6 0110 6 6

7 0111 7 7

8 1000 10 8

9 1001 11 9

10 1010 12 A

11 1011 13 B

12 1100 14 C

13 1101 15 D

14 1110 16 E

15 1111 17 F


16 0001 0000 20 10

17 0001 0001 21 11

18 0001 0010 22 12

19 0001 0011 23 13

20 0001 0100 24 14

Counting in Hexadecimal

When counting in hex, each digit can be incremented from 0 to F. Once it reaches F, the next count causes it to recycle to 0 and the next higher digit is incremented. This is illustrated in the following counting sequences: 0038, 0039, 003A, 003B, 003C, 003D, 003E, 003F, 0040; 06B8, 06B9, 06BA, 06BB, 06BC, 06BD, 06BE, 06BF, 06C0, 06C1.

2.5 Conversion from One Number System to the Other

Binary to Decimal Conversion

A binary number can be converted into a decimal number by multiplying each binary 1 or 0 by the weight corresponding to its position and adding all the values.

Example 2: Convert the binary number 110111 to decimal number.

Solution: [110111]2 = 1 × 2^5 + 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0

= 1 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 1 × 1

= 32 + 16 + 0 + 4 + 2 + 1

= [55]10

We can streamline binary to decimal conversion by the following procedure:

Step 1: Write the binary, i.e., all its bits in a row.

Step 2: Write 1, 2, 4, 8, 16, 32, ..., directly under the binary number working from right to left.

Step 3: Omit the decimal weight which lies under zero bits.

Step 4: Add the remaining weights to obtain the decimal equivalent.

The same method is used for binary fractional numbers.

Example 3: Convert the binary number 11101.1011 into its decimal equivalent.

Solution:

Step 1: 1 1 1 0 1 . 1 0 1 1

Binary Point

Step 2: 16 8 4 2 1 . 0.5 0.25 0.125 0.0625

Step 3: 16 8 4 0 1 . 0.5 0 0.125 0.0625

Step 4: 16 + 8 + 4 + 1 + 0.5 + 0.125 + 0.0625 = [29.6875]10

Hence, [11101.1011]2 = [29.6875]10
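The four-step procedure above can be sketched in Python; this is a minimal illustration, and the helper name `bin_to_dec` is our own, not from the text:

```python
def bin_to_dec(s):
    """Convert a binary string such as '11101.1011' to decimal by
    summing each bit times its positional weight (a power of 2)."""
    whole, _, frac = s.partition('.')
    value = 0.0
    for i, bit in enumerate(reversed(whole)):   # weights 1, 2, 4, 8, ...
        value += int(bit) * 2 ** i
    for i, bit in enumerate(frac, start=1):     # weights 0.5, 0.25, ...
        value += int(bit) * 2 ** -i
    return value

print(bin_to_dec('110111'))      # 55.0
print(bin_to_dec('11101.1011'))  # 29.6875
```

For pure integers, Python's built-in int('110111', 2) performs the same conversion directly.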

The abbreviation K stands for 2^10 = 1024. Therefore, 1K = 1024, 2K = 2048, 3K = 3072, 4K = 4096, and so on. Many personal computers have 64K memory; this means that such computers can store up to 65,536 bytes in the memory section.


Binary Numbers and Powers of 2

Decimal   Binary   Powers of 2   Equivalent   Abbreviation

0         0        2^0           1
1         1        2^1           2
2         10       2^2           4
3         11       2^3           8
4         100      2^4           16
5         101      2^5           32
6         110      2^6           64
7         111      2^7           128
8         1000     2^8           256
9         1001     2^9           512
10        1010     2^10          1024         1K
11        1011     2^11          2048         2K
12        1100     2^12          4096         4K
13        1101     2^13          8192         8K
14        1110     2^14          16384        16K
15        1111     2^15          32768        32K
16        10000    2^16          65536        64K

Decimal to Binary Conversion

There are several methods for converting a decimal number to a binary number. The first method is simply to

subtract values of powers of 2 which can be subtracted from the decimal number until nothing remains. The value

of the highest power of 2 is subtracted first, then the second highest, and so on.

Example 4: Convert the decimal integer 29 to the binary number system.

Solution: First the value of the highest power of 2 which can be subtracted from 29 is found. This is 2^4 = 16.

Then, 29 – 16 = 13

The value of the highest power of 2 which can be subtracted from 13 is 2^3; then 13 – 2^3 = 13 – 8 = 5. The value of the highest power of 2 which can be subtracted from 5 is 2^2. Then 5 – 2^2 = 5 – 4 = 1. The remainder after subtraction is 1, or 2^0. Therefore, the binary representation for 29 is given by,

[29]10 = 2^4 + 2^3 + 2^2 + 2^0 = 16 + 8 + 4 + 0 × 2 + 1

= 1 1 1 0 1

[29]10 = [11101]2

Similarly, [25.375]10 = 16 + 8 + 1 + 0.25 + 0.125

= 2^4 + 2^3 + 0 + 0 + 2^0 + 0 + 2^-2 + 2^-3

[25.375]10 = [11011.011]2

This is a laborious method for converting numbers. It is convenient for small numbers and can be performed

mentally, but is less used for larger numbers.

Decimal Fraction to Binary

The conversion of decimal fractions to binary fractions may be accomplished by several techniques. Again, the most obvious method is to subtract the highest value of the negative power of 2 which may be subtracted from the decimal fraction. Then, the next highest value of the negative power of 2 is subtracted from the remainder of the first subtraction, and this process is continued until there is no remainder or the desired precision is reached. A more systematic technique, used in the examples below, is to multiply the fraction repeatedly by 2 and record the integer carry at each step; the carries, read from top to bottom, form the binary fraction.

Example 5: Convert [0.6940]10 to a binary number.

Solution: 0.6940 × 2 = 1.3880 = 0.3880 with a carry of 1

0.3880 × 2 = 0.7760 = 0.7760 with a carry of 0

0.7760 × 2 = 1.5520 = 0.5520 with a carry of 1

0.5520 × 2 = 1.1040 = 0.1040 with a carry of 1

0.1040 × 2 = 0.2080 = 0.2080 with a carry of 0

0.2080 × 2 = 0.4160 = 0.4160 with a carry of 0

0.4160 × 2 = 0.8320 = 0.8320 with a carry of 0

0.8320 × 2 = 1.6640 = 0.6640 with a carry of 1

0.6640 × 2 = 1.3280 = 0.3280 with a carry of 1

We may stop here as the answer would be approximate.

∴ [0.6940]10 = [0.101100011]2

If more accuracy is needed, continue multiplying by 2 until you have as many digits as necessary for your

application.

Example 6: Convert [14.625]10 to a binary number.

Solution: First the integer part 14 is converted into binary and then, the fractional part 0.625 is converted into

binary as shown below:

Integer part Fractional part

14 ÷ 2 = 7 + 0 0.625 × 2 = 1.250 with a carry of 1

7 ÷ 2 = 3 + 1 0.250 × 2 = 0.500 with a carry of 0

3 ÷ 2 = 1 + 1 0.500 × 2 = 1.000 with a carry of 1

1 ÷ 2 = 0 + 1

∴ The binary equivalent is [1110.101]2
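The two halves of this worked example, repeated division by 2 for the integer part and repeated multiplication by 2 for the fraction, can be sketched as follows (`dec_to_bin` is our own helper name, not from the text):

```python
def dec_to_bin(value, max_frac_bits=10):
    """Convert a non-negative decimal to a binary string: repeated
    division by 2 for the integer part, repeated multiplication by 2
    (recording the carries) for the fractional part."""
    whole = int(value)
    frac = value - whole
    int_bits = ''
    while whole:
        int_bits = str(whole % 2) + int_bits   # remainders, bottom to top
        whole //= 2
    int_bits = int_bits or '0'
    frac_bits = ''
    while frac and len(frac_bits) < max_frac_bits:
        frac *= 2
        carry = int(frac)                      # record the carry digit
        frac_bits += str(carry)
        frac -= carry
    return int_bits + ('.' + frac_bits if frac_bits else '')

print(dec_to_bin(29))      # 11101
print(dec_to_bin(14.625))  # 1110.101
```

The `max_frac_bits` cap reflects the remark in Example 5: for fractions with no exact binary form, one simply stops at the desired precision.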

Octal to Decimal Conversion

An octal number can be easily converted to its decimal equivalent by multiplying each octal digit by its positional

weight.

Example 7: Convert (376)8 to decimal number.

Solution: The process is similar to binary to decimal conversion except that the base here is 8.

[376]8 = 3 × 8^2 + 7 × 8^1 + 6 × 8^0

= 3 × 64 + 7 × 8 + 6 × 1 = 192 + 56 + 6 = [254]10

The fractional part can be converted into decimal by multiplying it by the negative powers of 8.

Example 8: Convert (0.4051)8 to decimal number.

Solution: [0.4051]8 = 4 × 8^-1 + 0 × 8^-2 + 5 × 8^-3 + 1 × 8^-4

= 4 × 1/8 + 0 × 1/64 + 5 × 1/512 + 1 × 1/4096

∴ [0.4051]8 = [0.5100098]10


Example 9: Convert (6327.45)8 to its decimal number.

Solution: [6327.45]8 = 6 × 8^3 + 3 × 8^2 + 2 × 8^1 + 7 × 8^0 + 4 × 8^-1 + 5 × 8^-2

= 3072 + 192 + 16 + 7 + 0.5 + 0.078125

[6327.45]8 = [3287.578125]10
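The positional-weight rule used in Examples 7 to 9 can be written directly (`octal_to_dec` is an assumed name for this sketch):

```python
def octal_to_dec(s):
    """Sum each octal digit times its positional weight: positive
    powers of 8 for the integer part, negative powers for the fraction."""
    whole, _, frac = s.partition('.')
    value = sum(int(d) * 8 ** i for i, d in enumerate(reversed(whole)))
    value += sum(int(d) * 8 ** -(i + 1) for i, d in enumerate(frac))
    return value

print(octal_to_dec('376'))      # 254
print(octal_to_dec('6327.45'))  # 3287.578125
```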

Decimal to Octal Conversion

The methods used for converting a decimal number to its octal equivalent are the same as those used to convert from decimal to binary. To convert a decimal number to octal, we progressively divide the decimal number by 8, writing down the remainders after each division. This process is continued until zero is obtained as the quotient, the first remainder being the LSD.

The fractional part is multiplied by 8 to get a carry and a fraction. The new fraction obtained is again multiplied by 8 to get a new carry and a new fraction. This process is continued until the desired number of digits, i.e., sufficient accuracy, is obtained.

Example 10: Convert [416.12]10 to octal number.

Solution: Integer part 416 ÷ 8 = 52 + remainder 0 (LSD)

52 ÷ 8 = 6 + remainder 4

6 ÷ 8 = 0 + remainder 6 (MSD)

Fractional part 0.12 × 8 = 0.96 = 0.96 with a carry of 0

0.96 × 8 = 7.68 = 0.68 with a carry of 7

0.68 × 8 = 5.44 = 0.44 with a carry of 5

0.44 × 8 = 3.52 = 0.52 with a carry of 3

0.52 × 8 = 4.16 = 0.16 with a carry of 4

0.16 × 8 = 1.28 = 0.28 with a carry of 1

0.28 × 8 = 2.24 = 0.24 with a carry of 2

0.24 × 8 = 1.92 = 0.92 with a carry of 1

∴ [416.12]10 = [640.07534121]8

Octal to Binary Conversion

Since 8 is the third power of 2, each octal digit can be converted into its 3-bit binary form, and conversely each group of 3 bits into an octal digit. All eight 3-bit binary combinations are required to represent the eight octal digits. The octal number system is often used in digital systems, especially for input/output applications. The 3-bit binary equivalent of each octal digit is shown in Table.

Octal to Binary Conversion

Octal digit Binary equivalent

0 000

1 001

2 010

3 011

4 100


5 101

6 110

7 111

10 001 000

11 001 001

12 001 010

13 001 011

14 001 100

15 001 101

16 001 110

17 001 111

Example 11: Convert [675]8 to binary number.

Solution: Octal digit 6 7 5

↓ ↓ ↓

Binary 110 111 101

∴ [675]8 = [110 111 101]2

Example 12: Convert [246.71]8 to binary number.

Solution: Octal digit 2 4 6 . 7 1

↓ ↓ ↓ ↓ ↓

Binary 010 100 110 . 111 001

∴ [246.71]8 = [010 100 110 . 111 001]2

Binary to Octal Conversion

The simplest procedure is to use the binary triplet method. The binary digits are grouped into groups of three on each side of the binary point, with zeros added on either side if needed to complete a group of three. Then, each group of 3 bits is converted to its octal equivalent. Note that the highest digit in the octal system is 7.

Example 13: Convert [11001.101011]2 to octal number.

Solution: Binary 11001.101011

Divide into groups of 3 bits: 011 001 . 101 011

↓ ↓ ↓ ↓

3 1 . 5 3

Note that a zero is added to the left-most group of the integer part. Thus, the desired octal conversion is [31.53]8.

Example 14: Convert [11101.101101]2 to octal number.

Solution: Binary [11101.101101]2

Divide into groups of 3 bits 011 101 . 101 101

↓ ↓ ↓ ↓

3 5 . 5 5

∴ [11101.101101]2 = [35.55]8
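The binary-triplet grouping used in Examples 13 and 14 can be sketched as follows (`bin_to_octal` is our own name; the zero-padding mirrors the rule above):

```python
def bin_to_octal(s):
    """Group bits in threes on each side of the binary point, padding
    with zeros, then read each group of 3 bits as one octal digit."""
    whole, _, frac = s.partition('.')
    whole = whole.zfill((len(whole) + 2) // 3 * 3)     # pad on the left
    frac = frac.ljust((len(frac) + 2) // 3 * 3, '0')   # pad on the right
    digit = lambda g: str(int(g, 2))
    out = ''.join(digit(whole[i:i+3]) for i in range(0, len(whole), 3))
    if frac:
        out += '.' + ''.join(digit(frac[i:i+3]) for i in range(0, len(frac), 3))
    return out

print(bin_to_octal('11001.101011'))   # 31.53
print(bin_to_octal('11101.101101'))   # 35.55
```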


Hexadecimal to Binary Conversion

Hexadecimal numbers can be converted into binary numbers by converting each hexadecimal digit to its 4-bit binary equivalent using the code given in Table 2.8. If the hexadecimal digit is 3, it should not be represented by 2 bits as [11]2, but by 4 bits as [0011]2.

Example 15: Convert [EC2]16 to binary number.

Solution: Hexadecimal number E C 2

↓ ↓ ↓

Binary Equivalent 1110 1100 0010

∴ [EC2]16 = [1110 1100 0010]2

Example 16: Convert [2AB.81]16 to binary number.

Solution: Hexadecimal number

2 A B . 8 1

↓ ↓ ↓ ↓ ↓

0010 1010 1011 1000 0001

∴ [2AB.81]16 = [0010 1010 1011 . 1000 0001]2

Binary to Hexadecimal Conversion

Conversion from binary to hexadecimal is easily accomplished by partitioning the binary number into groups of four binary digits, starting from the binary point and working to the left and to the right. It may be necessary to add zeros to the last group if it does not contain exactly 4 bits. Each group of 4 bits is then replaced by its hexadecimal equivalent.

Example 17: Convert [10011100110]2 to hexadecimal number.

Solution: Binary number [10011100110]2

Grouping the above binary number into 4-bits, we have

0100 1110 0110

Hexadecimal equivalent ↓ ↓ ↓

4 E 6

∴ [10011100110]2 = [4E6]16

Example 18: Convert [111101110111.111011]2 to hexadecimal number.

Solution: Binary number [111101110111.111011]2

By Grouping into 4 bits we have, 1111 0111 0111 . 1110 1100

↓ ↓ ↓ . ↓ ↓

Hexadecimal equivalent F 7 7 . E C

∴ [111101110111.111011]2 = [F77.EC]16

The conversion between hexadecimal and binary is done in exactly the same manner as octal and binary,

except that groups of 4 bits are used.
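The 4-bit grouping of Examples 17 and 18 can be sketched in the same style (`bin_to_hex` is a hypothetical helper; the zero-padding mirrors the rule above):

```python
def bin_to_hex(s):
    """Partition bits into groups of four outward from the binary point,
    padding with zeros, then map each group to one hexadecimal digit."""
    whole, _, frac = s.partition('.')
    whole = whole.zfill(-(-len(whole) // 4) * 4)     # left-pad to a multiple of 4
    frac = frac.ljust(-(-len(frac) // 4) * 4, '0')   # right-pad the fraction
    digit = lambda g: '0123456789ABCDEF'[int(g, 2)]
    out = ''.join(digit(whole[i:i+4]) for i in range(0, len(whole), 4))
    if frac:
        out += '.' + ''.join(digit(frac[i:i+4]) for i in range(0, len(frac), 4))
    return out

print(bin_to_hex('10011100110'))           # 4E6
print(bin_to_hex('111101110111.111011'))   # F77.EC
```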

Hexadecimal to Decimal Conversion

As in octal, each hexadecimal number is multiplied by the powers of 16, which represents the weight according

to its position and finally adding all the values.


Another way of converting a hexadecimal number into its decimal equivalent is to first convert the

hexadecimal number to binary and then convert from binary to decimal.

Example 19: Convert [B6A]16 to decimal number.

Solution: Hexadecimal number [B6A]16

[B6A]16 = B × 16^2 + 6 × 16^1 + A × 16^0

= 11 × 256 + 6 × 16 + 10 × 1 = 2816 + 96 + 10 = [2922]10

Example 20: Convert [2AB.8]16 to decimal number.

Solution: Hexadecimal number

[2AB.8]16 = 2 × 16^2 + A × 16^1 + B × 16^0 + 8 × 16^-1

= 2 × 256 + 10 × 16 + 11 × 1 + 8 × 0.0625

∴ [2AB.8]16 = [683.5]10

Decimal to Hexadecimal Conversion

One way to convert from decimal to hexadecimal is the hex dabble method. The conversion is done in a similar fashion as in the case of binary and octal, taking the factor for division and multiplication as 16.

Any decimal integer can be converted to hex by successively dividing by 16 until zero is obtained in the quotient. The remainders are then written from bottom to top to obtain the hexadecimal result.

The fractional part of the decimal number is converted to hexadecimal by multiplying it by 16 and writing down the carry and the fraction separately. This process is continued until the fraction is reduced to zero or the required number of significant digits is obtained.

Example 21: Convert [854]10 to hexadecimal number.

Solution: 854 ÷ 16 = 53 + with a remainder of 6

53 ÷ 16 = 3 + with a remainder of 5

3 ÷ 16 = 0 + with a remainder of 3

∴ [854]10 = [356]16

Example 22: Convert [106.0664]10 to hexadecimal number.

Solution: Integer part

106 ÷ 16 = 6 + with a remainder of 10

6 ÷ 16 = 0 + with a remainder of 6

Fractional part

0.0664 × 16 = 1.0624 = 0.0624 + with a carry of 1

0.0624 × 16 = 0.9984 = 0.9984 + with a carry of 0

0.9984 × 16 = 15.9744 = 0.9744 + with a carry of 15 (F)

0.9744 × 16 = 15.5904 = 0.5904 + with a carry of 15 (F)

Fractional part [0.0664]10 = [0.10FF]16

∴ [106.0664]10 = [6A.10FF]16
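The hex dabble procedure, successive division by 16 for the integer part and multiplication by 16 for the fraction, can be sketched as follows (`dec_to_hex` is our own helper name):

```python
def dec_to_hex(value, frac_digits=4):
    """Divide the integer part by 16 collecting remainders (bottom to
    top); multiply the fraction by 16 collecting the carries in order."""
    digits = '0123456789ABCDEF'
    whole, frac = int(value), value - int(value)
    out = ''
    while whole:
        out = digits[whole % 16] + out   # remainder becomes a hex digit
        whole //= 16
    out = out or '0'
    frac_part = ''
    while frac and len(frac_part) < frac_digits:
        frac *= 16
        frac_part += digits[int(frac)]   # the carry (0..15 -> one hex digit)
        frac -= int(frac)
    return out + ('.' + frac_part if frac_part else '')

print(dec_to_hex(854))        # 356
print(dec_to_hex(106.0664))   # 6A.10FF
```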

Hexadecimal to Octal Conversion

This can be accomplished by first writing down the 4-bit binary equivalent of each hexadecimal digit and then partitioning it into groups of 3 bits each. Finally, the 3-bit octal equivalent is written down.

Example 23: Convert [2AB.9]16 to octal number.


Solution: Hexadecimal number 2 A B . 9

↓ ↓ ↓ ↓

4-bit numbers 0010 1010 1011 . 1001

3-bit pattern 001 010 101 011 . 100 100

↓ ↓ ↓ ↓ ↓ ↓

Octal number 1 2 5 3 . 4 4

∴ [2AB.9]16 = [1253.44]8

Example 24: Convert [3FC.82]16 to octal number.

Solution: Hexadecimal number 3 F C . 8 2

4-bit binary numbers 0011 1111 1100 . 1000 0010

3-bit pattern 001 111 111 100 . 100 000 100

↓ ↓ ↓ ↓ ↓ ↓ ↓

Octal number 1 7 7 4 . 4 0 4

[3FC.82]16 = [1774.404]8

Notice that zeros are added at the right in the above two examples to complete the groups of 3 bits.

Octal to Hexadecimal Conversion

It is the reverse of the above procedure. First, the 3-bit binary equivalent of each octal digit is written down and partitioned into groups of 4 bits; then the hexadecimal equivalent of each group is written down.

Example 25: Convert [16.2]8 to hexadecimal number.

Solution: Octal number 1 6 . 2

↓ ↓ ↓

3-bit binary 001 110 . 010

4-bit pattern 1110 . 0100

↓ ↓

Hexadecimal E . 4

∴ [16.2]8 = [E.4]16

Example 26: Convert [764.352]8 to hexadecimal number.

Solution: Octal number 7 6 4 . 3 5 2

3-bit binary 111 110 100 . 011 101 010

4-bit pattern 0001 1111 0100 . 0111 0101 0000

↓ ↓ ↓ ↓ ↓ ↓

Hexadecimal number 1 F 4 . 7 5 0

∴ [764.352]8 = [1F4.75]16
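The two-stage route through binary used in Examples 23 to 26 can be sketched in one direction (`hex_to_octal` is a hypothetical helper; the reverse direction swaps the group sizes):

```python
def hex_to_octal(s):
    """Go through binary: each hex digit becomes 4 bits, then the bit
    string is regrouped into threes and read as octal digits."""
    whole, _, frac = s.partition('.')
    bits_w = ''.join(format(int(c, 16), '04b') for c in whole)
    bits_f = ''.join(format(int(c, 16), '04b') for c in frac)
    bits_w = bits_w.zfill(-(-len(bits_w) // 3) * 3)    # left-pad to threes
    bits_f = bits_f.ljust(-(-len(bits_f) // 3) * 3, '0')
    oct_w = ''.join(str(int(bits_w[i:i+3], 2)) for i in range(0, len(bits_w), 3))
    out = oct_w.lstrip('0') or '0'
    if bits_f:
        out += '.' + ''.join(str(int(bits_f[i:i+3], 2)) for i in range(0, len(bits_f), 3))
    return out

print(hex_to_octal('2AB.9'))    # 1253.44
print(hex_to_octal('3FC.82'))   # 1774.404
```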

2.6 Floating Point Representation of Numbers

In decimal, very large and very small numbers are expressed in scientific notation as, for example, 4.69 × 10^23 and 1.601 × 10^-19. Binary numbers can also be expressed in this same notation by floating point representation. The floating point representation of a number consists of two parts. The first part represents a signed, fixed point number called the mantissa. The second part designates the position of the decimal (or binary) point and is called the exponent. The fixed point mantissa may be a fraction or an integer. The number of bits required to express the exponent and mantissa is determined by the accuracy desired from the computing system as well as its capability to handle such numbers. The decimal number +6132.789, for example, is represented in floating point as follows:

  0   6132789        0   04
 sign  mantissa     sign  exponent


The mantissa has a 0 in the leftmost position to denote a plus. The mantissa here is considered to be a fixed point fraction, so the decimal point is assumed to be at the left of the MSB. The decimal mantissa, when stored in a register, requires at least 29 flip-flops: four flip-flops for each BCD digit and one for the sign. The decimal point is not physically indicated in the register; it is only assumed to be there. The exponent contains the decimal number +04 (in BCD) to indicate that the actual position of the decimal point is four decimal positions to the right of the assumed decimal point. This representation is equivalent to the number expressed as a fraction times 10 to the exponent, that is, +0.6132789 × 10^+04. Because of this analogy, the mantissa is sometimes called the fraction part.

Consider the following decimal numbers to understand floating point notation.

(i) 42300

(ii) 369.4202

(iii) 0.00385

(iv) 643.15

The above numbers can be written in floating point representation as follows:

(i) 42300 = 423 × 10^2

(ii) 369.4202 = 0.3694202 × 10^3

(iii) 0.00385 = 385 × 10^-5

(iv) 643.15 = 64315 × 10^-2

Here, the first part is known as the mantissa. The mantissa is multiplied by some power of 10, and this power is known as the exponent.

Consider, for example, a computer that assumes integer representation for the mantissa and radix 8 for the numbers. The octal number +36.754 = 36754 × 8^-3, in its floating point representation, will look like this:

  0   36754          1   03
 sign  mantissa     sign  exponent

When this number is represented in a register in its binary coded form, the actual value of the register becomes

0 011 110 111 101 100   1 000 011

The register needs 23 flip-flops. The circuits that operate on such data must recognize the flip-flops assigned to the bits of the mantissa and exponent, and their associated signs. Note that if the exponent is increased by one (to –2), the actual point of the mantissa is shifted to the right by 3 bits (one octal digit).

Floating point is always interpreted to represent a number in the following form:

m × r^e

Only the mantissa m and the exponent e are physically represented in the register. The radix r and the radix point position of the mantissa are always assumed. A floating point binary number is represented in a similar manner, except that the radix assumed is 2. The number +1001.11, for example, is represented in a 16 bit register as follows:

  0   100111000      0   00100
 sign  mantissa     sign  exponent

The mantissa occupies 10 bits and the exponent 6 bits. The mantissa is assumed to be a fixed point fraction. If the mantissa is assumed to be an integer, the exponent will be 1 00101 (–5).


A floating point number is said to be normalized if the most significant position of the mantissa contains a nonzero digit. The mantissa 035, for example, is not normalized but 350 is. When 350 is represented in BCD, it becomes 0011 0101 0000 and, although two 0s seem to be present in the two most significant positions, the mantissa is normalized. Since the bits represent a decimal number, not a binary number, decimal numbers in BCD must be taken in groups of 4 bits; the first digit is 3 and is nonzero.

When the mantissa is normalized, it has no leading zeros and therefore contains the maximum possible number of significant digits. Consider, for example, a register that can accommodate a mantissa of five decimal digits and a sign.

The number +0.35748 × 10^2 = 35.748 is normalized because the mantissa has a nonzero digit, 3, in its most significant position. This number can be represented in an unnormalized form as +0.00357 × 10^4 = 35.7. This unnormalized number contains two leading zeros and therefore the mantissa can accommodate only three significant digits. The two least significant digits, 4 and 8, that were accommodated in the normalized form are lost in the unnormalized form because the register can only accommodate five digits.
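The normalization bookkeeping described above can be sketched for a decimal (radix 10) fractional mantissa; `normalize` is our own helper name, and the rounding is only there to tame floating point noise:

```python
def normalize(mantissa, exponent, radix=10, digits=5):
    """Normalize a fractional mantissa (|m| < 1) so that its most
    significant digit is nonzero, compensating in the exponent so that
    the value m * radix**exponent is unchanged."""
    while mantissa and abs(mantissa) < 1 / radix:
        mantissa *= radix   # shift the radix point one digit right...
        exponent -= 1       # ...and decrease the exponent to compensate
    return round(mantissa, digits), exponent

# +0.00357 x 10^4 (unnormalized) becomes +0.357 x 10^2; both equal 35.7
print(normalize(0.00357, 4))   # (0.357, 2)
```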

Arithmetic operations with floating point numbers are more complicated than arithmetic operations with

fixed point numbers and their execution takes longer and requires more complex hardware. However, floating

point representation is a must for scientific computations because of the scaling problems involved with fixed

point computations. Many computers and all electronic calculators have built-in capability of performing floating

point arithmetic operations. Computers that do not have hardware for floating point computations have a set of

subroutines to help the user program his scientific problems with floating point numbers.

Example 27: Determine the number of bits required to represent in floating point notation the exponent for decimal numbers in the range of 10^±86.

Solution: Let n be the required number of bits to represent the number 10^86.

∴ 2^n = 10^86

Taking logarithms, n log 2 = 86

∴ n = 86 / log 2 = 86 / 0.3010 = 285.7

∴ 10^±86 = 2^±285.7

The exponent ±285 can be represented by a 10 bit binary word. It has a range of exponents (+511 to –512).

2.7 Binary Arithmetic

Arithmetic operations are done in a computer not by using decimal numbers, as we do normally, but by using binary numbers. Arithmetic circuits in computers and calculators perform arithmetic and logic operations. All arithmetic operations take place in the arithmetic unit of a computer. The electronic circuit is capable of adding two or three binary digits at a time, and binary addition alone is sufficient to perform subtraction. Thus, a single circuit of a binary adder with a suitable shift register can perform all the arithmetic operations.

Binary Addition

Binary addition is performed in the same manner as decimal addition. Binary addition is the key to binary subtraction, multiplication and division. There are only four cases that occur in adding two binary digits in any position. These are shown in the Table below. In addition, the following cases involving a carry may arise:

(i) 1 + 1 + 1 = 11 (i.e., 1 with a carry of 1 into the next position)

(ii) 1 + 1 + 1 + 1 = 100

(iii) 10 + 1 = 11

The rules in rows (1), (2) and (3) of the Table are the same as in decimal addition. Rule (4) states that adding 1 and 1 gives one-zero (meaning decimal 2 and not decimal 10).


There is a carry from the previous position. 'Carry-overs' are performed in the same manner as in decimal arithmetic. Since 1 is the largest digit in the binary system, any sum greater than 1 requires that a digit be carried over.

Binary Addition

Sl. No. Augend Addend Carry Sum Result

(A) + (B) (C) (S)

1 0 + 0 0 0 0

2 0 + 1 0 1 1

3 1 + 0 0 1 1

4 1 + 1 1 0 10

Example 28: Add the binary numbers (i) 011 and 101, (ii) 1011 and 1110, (iii) 10.001 and 11.110, (iv) 1111 and

10010, and (v) 11.01 and 101.0111.

Solution:

(i)  Binary               Equivalent decimal
       11     ← Carry
       011                      3
     + 101                    + 5
     Sum = 1000                 8

(ii) Binary               Decimal
       11     ← Carry
       1011                    11
     + 1110                  + 14
     Sum = 11001               25

(iii) Binary              Decimal
        1     ← Carry
       10.001                   2.125
     + 11.110                 + 3.750
     Sum = 101.111              5.875

(iv) Binary               Decimal
       11     ← Carry
       1111                    15
     + 10010                 + 18
     Sum = 100001              33

(v)  Binary               Decimal
       11     ← Carry
       11.01                    3.25
     + 101.0111               + 5.4375
     Sum = 1000.1011            8.6875

Since the circuit in all digital systems actually performs addition that can handle only two numbers at a time, it is not necessary to consider the addition of more than two binary numbers. When more than two numbers are to be added, the first two are added together and then their sum is added to the third number, and so on. Almost all modern digital machines can perform an addition operation in less than 1 µs.
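Column-by-column addition with carries, as in the table and Example 28, can be sketched as follows (`bin_add` is our own helper operating on bit strings):

```python
def bin_add(a, b):
    """Add two binary strings column by column, carrying whenever a
    column sum exceeds 1 (the rule 1 + 1 = 10)."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)
    carry, out = 0, ''
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        out = str(s % 2) + out   # sum bit for this column
        carry = s // 2           # carry into the next column
    return ('1' + out) if carry else out

print(bin_add('1011', '1110'))    # 11001
print(bin_add('1111', '10010'))   # 100001
```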

Larger Binary Numbers

Column by column addition applies to binary as well as decimal numbers.

Example 29: Add the following binary numbers.

(i) 1101101 and 1001110 (ii) 1111001 and 1100101

(iii) 110011 and 111000 (iv) 1111110 and 11100111

Solution:

(i)      111    ← Carry
       1101101
     + 1001110
      10111011

(ii)     111    ← Carry
       1111001
     + 1100101
      11011110

(iii)      1    ← Carry
        110011
      + 111000
       1101011

(iv)  111111    ← Carry
       1111110
    + 11100111
     101100101

Example 30: Add these 8-bit numbers : 0110 1011 and 1011 0110. Then, show the same numbers in hexadecimal

notation.

Solution:    8 bit binary          Hexadecimal equivalent
             1111 11    ← Carry
             0110 1011                 6 B H
           + 1011 0110               + B 6 H
           1 0010 0001               1 2 1 H

The logic equation representing the sum is also known as the exclusive OR function and can be represented in Boolean ring algebra as S = A′B + AB′ = A ⊕ B.

Binary Subtraction

Subtraction is the inverse operation of addition. To subtract, it is necessary to establish a procedure for subtracting a larger digit from a smaller digit. The only case in which this occurs with binary numbers is when 1 is subtracted from 0. The result is 1, but it is necessary to borrow 1 from the next column to the left. The rules of binary subtraction are shown below in Table 1.13.

(i) 0 – 0 = 0

(ii) 1 – 0 = 1

(iii) 1 – 1 = 0

(iv) 0 – 1 = 1 with a borrow of 1

(v) 10 – 1 = 01

Table 1.13 Binary Subtraction

Sl. No.   Minuend – Subtrahend   Result
             A          B

1          0 – 0                 0
2          0 – 1                 1 with a borrow of 1
3          1 – 0                 1
4          1 – 1                 0

Example 31: Perform the following binary subtractions.

Solution:

(i)  Binary     Decimal        (ii)  Binary     Decimal
      1001         9                 10000         16
     – 101       – 5                –  011        – 3
       100         4                  1101         13


(iii) Binary     Decimal       (iv)  Binary     Decimal
      110.01       6.25               1101         13
     – 100.1     – 4.5              – 1010       – 10
        1.11       1.75               0011          3

Example 32: Show the binary subtraction of (128)10 from (210)10.

Solution: Converting the given decimal numbers into their hexadecimal and binary equivalents, we have

210 → D 2 H → 1101 0010

128 → 8 0 H → 1000 0000

1101 0010 D 2 H

– 1000 0000 – 8 0 H

0101 0010 5 2 H
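Column-by-column subtraction with borrows, following the rules in Table 1.13 (for a minuend at least as large as the subtrahend), can be sketched as follows (`bin_sub` is our own helper name):

```python
def bin_sub(a, b):
    """Subtract binary string b from a (assuming a >= b) column by
    column, borrowing 1 from the next column when subtracting 1 from 0."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)
    borrow, out = 0, ''
    for x, y in zip(reversed(a), reversed(b)):
        d = int(x) - int(y) - borrow
        borrow = 1 if d < 0 else 0   # the rule 0 - 1 = 1 with a borrow of 1
        out = str(d % 2) + out       # in Python, -1 % 2 == 1
    return out

print(bin_sub('11010010', '10000000'))   # 01010010  (210 - 128 = 82)
```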

Binary Multiplication

The multiplication of binary numbers is done in the same manner as the multiplication of decimal numbers. The following are the four basic rules for multiplying binary digits:

(i) 0 × 0 = 0 (ii) 0 × 1 = 0 (iii) 1 × 0 = 0 (iv) 1 × 1 = 1

In a computer, the multiplication operation is performed by repeated additions, in much the same manner as the addition of all partial products to obtain the full product. Since the multiplier digits are either 0 or 1, we are always multiplying by 0 or 1 and no other digit.

Example 33: Multiply the binary numbers 1011 and 1101.

Solution:
        1011   ← Multiplicand = [11]10
      × 1101   ← Multiplier = [13]10
        1011   ← Partial products
       0000
      1011
     1011
    10001111   ← Final product = [143]10

A typical 8-bit microprocessor such as the 6502 performs multiplication in software. In other words, multiplication is done with addition instructions.
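Multiplication by repeated addition of shifted partial products, as in Example 33, can be sketched as follows (`bin_mul` is our own helper):

```python
def bin_mul(a, b):
    """Multiply binary strings by adding shifted partial products,
    as an adder-based multiplier does: for each 1 bit of the
    multiplier, add the multiplicand shifted left by that bit's position."""
    product = 0
    for i, bit in enumerate(reversed(b)):
        if bit == '1':
            product += int(a, 2) << i   # partial product, shifted i places
    return bin(product)[2:]

print(bin_mul('1011', '1101'))   # 10001111  (11 x 13 = 143)
```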

2.8 1’s and 2’s Complements

Subtraction of a number from another can be accomplished by adding the complement of the subtrahend to the minuend. The exact difference can be obtained with minor manipulations.

1’s Complement

The 1’s complement form of any binary number is obtained simply by changing each 0 in the number to a 1 and each 1 in the number to a 0.

Binary number 1’s complement

1011 → 0100

110110 → 001001

1100 1011 → 0011 0100

1011 1010 1011 1001 → 0100 0101 0100 0110


1’s Complement Arithmetic

(a) Subtrahend is smaller than the minuend

1. Complement the subtrahend by converting all 1’s to 0’s and all 0’s to 1’s.

2. Proceed as in addition.

3. Disregard the carry and add 1 to the total (end-around-carry).

Example 34: Perform the subtractions using 1’s complement addition of the following binary numbers:

(i) 110010 (ii) 111001010 (iii) 11010101

– 101101 – 110110101 – 10101100

Solution:

(i)    110010          110010
     – 101101   ⇒   + 010010   ← 1’s complement of 101101
                    1 000100
                            1   ← end-around carry
                      000101

(ii)   111001010          111001010
     – 110110101   ⇒   + 001001010   ← 1’s complement of 110110101
                       1 000010100
                                 1   ← end-around carry
                         000010101

(iii)  11010101          11010101
     – 10101100   ⇒   + 01010011   ← 1’s complement of 10101100
                      1 00101000
                               1   ← end-around carry
                        00101001

(b) Subtrahend is larger than the minuend

1. Complement the subtrahend.

2. Proceed as in addition.

3. Complement the result and place a negative sign in front of the result.

Example 35: Perform the subtractions using 1’s complement of the following binary numbers:

(i) 1011010 (ii) 1101011 (iii) 11110011

– 1101010 – 1110101 – 11111010

Solution:

(i)    1011010          1011010
     – 1101010   ⇒   + 0010101   ← 1’s complement of 1101010
                       1101111   (no carry)

1’s complement of 1101111 = 0010000; the answer is – 0010000.

(ii)   1101011          1101011
     – 1110101   ⇒   + 0001010   ← 1’s complement of 1110101
                       1110101   (no carry)

1’s complement of 1110101 = 0001010; the answer is – 0001010.

(iii)  11110011          11110011
     – 11111010   ⇒   + 00000101   ← 1’s complement of 11111010
                        11111000   (no carry)

1’s complement of 11111000 = 00000111; the answer is – 00000111.
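Both cases of 1’s complement subtraction — end-around carry when the subtrahend is smaller, re-complement and negate when it is larger — can be sketched on integers of a fixed bit width (`ones_complement_sub` and its `width` parameter are our own assumptions, not from the text):

```python
def ones_complement_sub(a, b, width=8):
    """Compute a - b by 1's complement addition on width-bit values:
    add the complement of b; an end-around carry is added back,
    otherwise the result is negative and must be re-complemented."""
    mask = (1 << width) - 1
    total = a + (~b & mask)          # add the 1's complement of b
    if total > mask:                 # a carry out occurred
        return (total & mask) + 1    # end-around carry: add it back
    return -((~total) & mask)        # no carry: complement and negate

print(ones_complement_sub(0b110010, 0b101101, 6))    # 5
print(ones_complement_sub(0b1011010, 0b1101010, 7))  # -16
```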


2’s Complement Subtraction

(a) Subtrahend is smaller than the minuend

1. Determine the 2’s complement of the smaller number.

2. Add this to the larger number.

3. Disregard the carry.

Example 36: Subtract the following using the 2’s complement method: (i) (1011)2 from (1100)2, (ii) (1001)2 from (1101)2, (iii) (0101)2 from (1001)2.

Solution: (i) Direct subtraction 2’s complement method

1100 1100

– 1011 + 0101 ← [2’s complement of 1011]

0001 Carry → 1 0001

∴ The carry is disregarded. Thus, the answer is (0001)2.

(ii) Direct subtraction 2’s complement method

1101 1101

– 1001 + 0111 ← 2’s complement of 1001

0100 Carry → 1 0100

∴ The carry is disregarded. Thus, the answer is (0100)2.

(iii) Direct subtraction 2’s complement method

1001 1001

– 0101 + 1011 ← 2’s complement of 0101

0100 Carry → 1 0100

∴ The carry is disregarded. Thus, the answer is (0100)2.

(b) Subtrahend is larger than the minuend

Example 37: Subtract the following using the 2’s complement method: (i) (1011)2 from (1001)2, (ii) (1100)2 from (1000)2.

Solution: (i) Direct subtraction 2’s complement method

1001 1001

– 1011 + 0101 ← 2’s complement of 1011

– 0010 No carry → 1110

No carry is obtained. Thus, the difference is negative and the true answer is the 2’s complement of (1110)2 with a negative sign, i.e., – (0010)2.

(ii) Direct subtraction 2’s complement method

1000 1000

– 1100 + 0100 ← 2’s complement of 1100

– 0100 No carry → 1100


Since no carry is obtained, the difference is negative and therefore the true answer is the 2’s complement of (1100)2 with a negative sign, i.e., – (0100)2.
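The two cases of 2’s complement subtraction — discard the carry, or re-complement when no carry appears — can be sketched in the same style (`twos_complement_sub` is our own helper; `width` stands in for the register size):

```python
def twos_complement_sub(a, b, width=4):
    """Compute a - b on width-bit values by adding the 2's complement
    of b; a carry out is discarded, while no carry means the answer is
    negative (re-complement for the magnitude)."""
    mask = (1 << width) - 1
    total = a + ((~b & mask) + 1)            # add the 2's complement of b
    if total > mask:
        return total & mask                  # carry out: simply discard it
    return -(((~total & mask) + 1) & mask)   # no carry: negative result

print(twos_complement_sub(0b1100, 0b1011))   # 1
print(twos_complement_sub(0b1000, 0b1100))   # -4
```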

2’s Complement Addition

There are four possible cases:

1. Both numbers positive

2. A positive number and a smaller negative number

3. A negative number and a smaller positive number

4. Both numbers negative

Binary Odometer Representation in 2’s Complement

The binary odometer is a marvellous way to understand 2’s complement representation. There are two important ideas to notice about these odometer readings: (i) the MSB is the sign bit: 0 for a + sign and 1 for a – sign; (ii) the negative numbers shown in the Figure represent the 2’s complements of the positive numbers.

Positive and negative numbers of the same magnitude are 2’s complements of each other. Hence, we can take the 2’s complement of a positive binary number to find the corresponding negative binary number.

Case 1: Two positive numbers

Consider the addition of + 29 and + 19.

+ 29 0001 1101 (augend)

Adding + 19 0001 0011 (addend)

+ 48 0011 0000 (sum = 48)

Case 2: Positive and smaller negative number

Consider the addition of +39 and –22, remembering that the –22 will be in its 2’s complement form. Thus +22

(0001 0110) must be converted to –22 (1110 1010).

+ 39 0010 0111

Adding – 22 1110 1010

+ 17 1 0001 0001

This carry is disregarded, so the result is 0001 0001 (+17).

In this case, the sign bit of the addend is 1. The sign bits also participate in the process of addition. In fact, a carry is generated in the last position of the addition. This carry is always disregarded.

Case 3: Positive and larger negative number

Consider the addition of –47 and +29.

– 47 1101 0001

Adding + 29 0001 1101

– 18 1110 1110

The result has a sign bit of 1, indicating a negative number, and it is in 2’s complement form. To find the true magnitude of the sum, we take the 2’s complement of 1110 1110; the result is 0001 0010 (+18). Thus, 1110 1110 represents –18.


Case 4: Two negative numbers

Consider the addition of – 32 and – 44.

– 32 1110 0000 (augend)

Adding – 44 1101 0100 (addend)

– 76 1 1011 0100 (sum = – 76)

This carry is disregarded, so the result is 1011 0100.
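The four cases above can be sketched in Python. This is an illustrative sketch, not part of the original text: the helper names (`add8`, `to_twos_complement`, `from_twos_complement`) are invented, and the 8-bit width matches the worked examples.

```python
MASK = 0xFF  # work in 8 bits, as in the examples above

def to_twos_complement(n):
    """Encode a signed integer as an 8-bit 2's complement pattern."""
    return n & MASK

def from_twos_complement(bits):
    """Decode an 8-bit pattern back to a signed integer (MSB is the sign bit)."""
    return bits - 256 if bits & 0x80 else bits

def add8(a, b):
    """Add two signed numbers in 8-bit 2's complement.

    Any carry out of the sign-bit position is discarded by the mask,
    exactly as in the hand additions above.
    """
    result = (to_twos_complement(a) + to_twos_complement(b)) & MASK
    return from_twos_complement(result)

# The four cases from the text:
print(add8(29, 19))    # 48
print(add8(39, -22))   # 17
print(add8(-47, 29))   # -18
print(add8(-32, -44))  # -76
```

Note that one addition routine handles all four cases; this is the whole point of 2’s complement representation.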

2.9 Information Representation and Codes

Information or data representation refers to the methods used internally to represent information stored in a

computer. Computers store different types of information, such as numbers, text, graphics of many varieties

(stills, video, animation) and sound. Information of all these types is stored internally in a simple binary format, i.e., as a sequence of 0s and 1s. These 0s and 1s form the bits and bytes of a computer word.

Bit and Byte

The terms bit and byte are frequently used in computer terminology. The term ‘bit’ is derived from ‘BInary digiT’ and refers to the smallest piece of information used by the computer. The term ‘byte’ was coined by Dr. Werner Buchholz in July 1956 during the early design phase for the IBM Stretch computer. A byte is a sequence of eight bits and is the basic unit of digital information in computing and telecommunications.

A bit is a single numeric value, either ‘1’ or ‘0’, that encodes a single unit of digital information. For example, in the Internet Protocol (IP), an IPv4 address contains 32 bits, or 4 bytes. The bytes divide the bits into groups. The IP address 192.168.0.1 is encoded with the following bits and bytes:

11000000 10101000 00000000 00000001

Bits are grouped into bytes to increase the efficiency of computer hardware, including network equipment, disks and memory. Traditionally, a byte referred to the number of bits used to encode a single character of text in a computer, and hence it is considered the basic addressable element in many computer architectures.

The size of the byte has historically been hardware dependent, and no definitive standard mandates the size. The de facto standard of eight bits is a convenient power of two, permitting the values 0 through 255 for one byte. Commercial computing architectures are based on the 8-bit size.
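The byte-by-byte encoding of the IP address above can be reproduced in a few lines of Python. This is an illustrative sketch; the function name `ip_to_bits` is invented for the example.

```python
def ip_to_bits(ip):
    """Render a dotted-quad IPv4 address as four 8-bit binary groups."""
    return " ".join(format(int(octet), "08b") for octet in ip.split("."))

print(ip_to_bits("192.168.0.1"))
# 11000000 10101000 00000000 00000001
```

Each decimal octet (0 through 255) fits exactly in one 8-bit byte, which is why IPv4 addresses are conventionally written as four decimal numbers.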

Alpha Numeric Codes

Character Representation

Binary data is not the only data handled by a computer. We also need to process alphanumeric data like alphabets

(upper and lower case), digits (0 to 9) and special characters like + – * / ( ) space or blank, etc. These also must

be internally represented as bits.

Table 1.4 BCD Equivalent of Decimal Digits

Decimal Number      Binary Equivalent
0                   0000
1                   0001
2                   0010
3                   0011
4                   0100
5                   0101
6                   0110
7                   0111
8                   1000
9                   1001


Binary Coded Decimal

Binary Coded Decimal (BCD) is one of the early memory codes. It is based on the concept of converting each

digit of a decimal number into its binary equivalent rather than converting the entire decimal value into a pure

binary form. It further uses 4 digits to represent each of the digits. Table 1.4 shows the BCD equivalent of the

decimal digits.

Converting 42 (decimal) into its BCD equivalent would result in:

4 → 0100    2 → 0010

Thus 42 = 0100 0010, or 01000010, in BCD.
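The digit-by-digit conversion can be sketched in Python. This is an illustrative sketch; the function name `to_bcd` is invented for the example.

```python
def to_bcd(n):
    """Encode each decimal digit of n as its own 4-bit group (BCD)."""
    return " ".join(format(int(d), "04b") for d in str(n))

print(to_bcd(42))  # 0100 0010
```

Note the contrast with pure binary: 42 in pure binary is 101010, while BCD keeps one 4-bit group per decimal digit.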

As seen, 4-bit BCD code can be used to represent decimal numbers only. Since 4 bits are insufficient to

represent the various other characters used by the computer, instead of using only 4-bits (giving 16 possible

combinations), computer designers commonly use 6 bits to represent characters in BCD code. In this, the 4 BCD

numeric place positions are retained but two additional zone positions are added. With 6 bits, it is possible to

represent 2⁶, i.e., 64, different characters. This is, therefore, sufficient to represent the decimal digits (10), alphabetic

characters (26) and special characters (28).

Extended Binary Coded Decimal Interchange Code

The major drawback with the BCD code is that it allows only 64 different characters to be represented. This is

not sufficient to provide for decimal numbers (10), lower-case letters (26), upper-case letters (26), and a fairly

large number of special characters (28 plus).

The BCD code was, therefore, extended from a 6-bit to an 8-bit code. The added 2 bits are used as

additional zone bits, expanding the zone bits to 4. This resulting code is called the Extended Binary Coded

Decimal Interchange Code (EBCDIC). Using the EBCDIC it is possible to represent 2⁸, i.e., 256, characters. This

takes care of the character requirement along with a large quantity of printable and several non-printable control

characters (movement of the cursor on the screen, vertical spacing on printer, and so on).

Since EBCDIC is an 8-bit code, it can easily be divided into two 4-bit groups. Each of these groups can be

represented by one hexadecimal digit. Thus, hexadecimal number system is used as a notation for memory dump

by computers that use EBCDIC for internal representation of characters.

Developed by IBM, EBCDIC code is used in most IBM models and many other computers.

American Standard Code for Information Interchange

A computer code that is very widely used for data interchange is called the ‘American Standard Code for

Information Interchange’ or ASCII. Several computer manufacturers have adopted it as their computers’ internal

code. ASCII uses 7 bits to represent 128 characters. An extended 8-bit version of ASCII, allowing for 256 different characters, is now used in microcomputers.

Let us look at the encoding method. Table below shows the bit combinations required for each character.

Bit Combinations

The column heading gives the high-order hexadecimal digit of the character code and the row label gives the low-order digit. For example, ‘H’ is in column 40, row 08, so its code is 48.

R     00    10    20    30    40    50    60    70
00    NUL   DLE   SP    0     @     P     `     p
01    SOH   DC1   !     1     A     Q     a     q
02    STX   DC2   "     2     B     R     b     r
03    ETX   DC3   #     3     C     S     c     s
04    EOT   DC4   $     4     D     T     d     t
05    ENQ   NAK   %     5     E     U     e     u
06    ACK   SYN   &     6     F     V     f     v
07    BEL   ETB   '     7     G     W     g     w
08    BS    CAN   (     8     H     X     h     x
09    TAB   EM    )     9     I     Y     i     y
0A    LF    SUB   *     :     J     Z     j     z
0B    VT    ESC   +     ;     K     [     k     {
0C    FF    FS    ,     <     L     \     l     |
0D    CR    GS    -     =     M     ]     m     }
0E    SO    RS    .     >     N     ^     n     ~
0F    SI    US    /     ?     O     _     o     DEL


Thus, to code a text string ‘Hello.’ in ASCII using hexadecimal digits:

H e l l o .

48 65 6C 6C 6F 2E

The string is represented by the byte sequence 48 65 6C 6C 6F 2E.
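The same encoding can be computed with Python’s built-in `ord` function, which returns a character’s code point (for these characters, its ASCII code):

```python
text = "Hello."
# Format each character's code as a two-digit uppercase hex number.
codes = " ".join(format(ord(ch), "02X") for ch in text)
print(codes)  # 48 65 6C 6C 6F 2E
```

Reading the codes back against the table confirms them: ‘H’ is column 40, row 08 (48); ‘e’ is column 60, row 05 (65); and so on.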

2.10 Error Detection Codes

During the process of binary data transmission, errors may occur. If a single error transforms a valid code word

into an invalid one, it is said to be a single-error-detecting code. The simplest and most commonly used error-detecting method is the parity check, in which an extra parity bit is included with the binary message to make the total number of 1s either odd or even, resulting in two methods, viz. (i) the even-parity method and (ii) the odd-parity

method. In the even-parity method, the total number of 1s in the code group (including the parity bit) must be an

even number. Similarly, in the odd-parity method, the total number of 1s (including the parity bit) must be an odd

number. The parity bit can be placed at either end of the code word, provided the receiver is able to differentiate between the parity bit and the actual data.

Parity-bit Generation

Message Even-parity code Odd-parity code

xyz xyz p xyz p

000 000 0 000 1

001 001 1 001 0

010 010 1 010 0

011 011 0 011 1

100 100 1 100 0

101 101 0 101 1

110 110 0 110 1

111 111 1 111 0

If a single error occurs, it transforms the valid code into an invalid one. This helps in the detection of single-bit errors. Though the parity code is meant for single-error detection, it can detect any odd number of errors. However, in both cases, the original code word cannot be recovered. If an even number of errors occurs, the parity check is satisfied, giving an erroneous result.
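The parity generation of the table above, and the detection of a single flipped bit, can be sketched in Python. The function names (`parity_bit`, `check`) are invented for this illustration.

```python
def parity_bit(bits, even=True):
    """Return the parity bit for a bit string (e.g. "011").

    Even parity: the appended bit makes the total count of 1s even.
    Odd parity: the appended bit makes the total count of 1s odd.
    """
    ones = bits.count("1")
    bit = ones % 2  # 1 iff the message already has an odd number of 1s
    return str(bit) if even else str(1 - bit)

def check(codeword, even=True):
    """True if a received codeword (message + parity bit) passes the check."""
    ones = codeword.count("1")
    return (ones % 2 == 0) if even else (ones % 2 == 1)

print(parity_bit("011"))       # '0'  (matches the table: 011 -> 011 0)
print(parity_bit("011", even=False))  # '1'  (odd parity: 011 -> 011 1)
print(check("0110"))           # True: codeword is valid
print(check("0111"))           # False: a single flipped bit is detected
```

Flipping any two bits of a valid codeword leaves `check` returning True, which is exactly the even-number-of-errors blind spot described above.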

Check Sums

The parity method can detect only a single error within a word, not double errors: since a double error does not change the parity of the bits, the parity checker will not indicate any error. The check sum method is used to detect such errors. The working of this method is explained in the following lines.

Initially, word A (10110111) is transmitted; next, word B (00100010) is transmitted. The binary digits in the

two words are added and the sum obtained is retained in the transmitter. Then, a word C is transmitted and added

to the previous sum and the new sum is retained. In the same manner, each word is added to the previous sum and

after transmission of all the words, the final sum, called the check sum, is also transmitted. The same operation is

done at the receiving end and the final sum obtained here is checked against the transmitted check sum. If the

two sums are equal, then there is no error.

2.11 Building Blocks of Computers

The first computers did not have operating systems; they simply ran standalone programs. By the mid-1960s, computer vendors were providing tools for developing, scheduling and executing jobs in a batch processing mode.

The operating systems originally used in mainframes and later in microcomputers only supported one

program at a time and therefore required a very basic scheduler. Each program was in complete control of the computer while it ran. Multitasking or time sharing came to be associated with mainframe computers in the

1960s. By 1970, minicomputers would be sold with proprietary operating systems. An Operating System (OS)

can be defined as ‘Software responsible for the direct control and management of hardware and basic system

operations’.

Operating Systems

Operating systems hide the idiosyncrasies of hardware by providing abstractions for ease of use. Abstraction

hides the low level details of the hardware and provides high level user friendly functions to use a hardware

piece. For example, consider using a hard disk which records information in terms of magnetic polarities. A hard

disk consists of many cylinders and tracks and sectors within a track and the read/write heads for writing and

reading bits to a sector. A user of a computer system cannot convert the data to be recorded to a format needed

for the disk and issue low level commands to address the appropriate track and read/write head and write the

data on to the right sector. There are software programs called device drivers for every device type to handle

this task. The programmer just issues read/write commands through the OS and the OS passes it to the driver

which translates this command to the low level commands that a hard disk can understand. The OS may issue the

same command for reading/writing to a tape drive or to a printer. However, the device drivers translate it into the

proper low level commands understood by the addressed device. The same is true for any other electromechanical

and electronic devices connected to a computer.

An application program often needs to use the functionalities built into an operating system as part of its code while executing. The OS provides these functionalities to other programs through the use of software interrupts or system calls. Library functions are provided for all standard Input/Output (I/O) operations. A programmer can simply reuse them and concentrate on coding the major logical aspects of the problem at hand.

Efficient Allocation and Utilization of Resources

A computer system has various resources like the Central Processing Unit (CPU), memory and I/O devices.

Every use of the resources is controlled by the OS. Executing programs or processes may request the use of resources, and the OS sanctions the request and allocates the requested resources if available. The allocated resources are normally taken back from the processes after their use is completed. That is, the OS is considered the manager of the resources of a computer system. The allocation of resources must be done with overall efficiency and utilization in mind; that is, the allocation must improve the system throughput (the number of jobs executed per unit time) and use resources in a fair manner.

Hardware Components of Computer Systems

We understand from the discussion above that operating systems control the hardware components of computers. So, let us look at the major hardware components of computers. The diagram shown in the figure below depicts the four major hardware components of a computer system: the processor or CPU, I/O devices, memory and buses. Let us see a bit more detail of each of these components.

Figure: Hardware Components of a Computer System (CPU, memory and I/O devices connected by buses)


3. PROGRAMMING LANGUAGES

A computer language is a language that can be understood by the computer; it is the computer’s native language. Computer languages serve the same purpose as human languages: they are a means of communication. Let us look at the similarities and differences between computer languages and human languages.

A natural or human language is the language that people speak daily, such as English, Hindi, French or German. It is made up of words and rules, known as the lexicon and syntax, respectively. These words are joined to make meaningful phrases according to the rules of the syntax. A computer language also consists of a lexicon and syntax, i.e., characters, symbols and rules of usage that allow the user to communicate with the computer.

The main difference between a natural language and a computer language is that natural languages have a large set of words (vocabulary) to choose from, while computer languages have a limited or restricted set of words. Thus, fewer words but more rules characterize a computer language.

Each and every problem to be solved by the computer needs to be broken down into discrete logical steps before the computer can execute it. The process of writing such instructions in a computer or programming language is called programming or coding.

Just as computer hardware has improved over the years, programming languages have also moved from machine-oriented languages (that used strings of binary 0s and 1s) to problem-oriented languages (that use common English terms). All computer languages can, however, be classified under the following categories:

• Machine Language (First Generation Language)

• Assembly Language (Second Generation Language)

• High-Level Language (Third Generation Language)

3.1 Machine Language

The computer can understand only binary-based languages. As already discussed, this is a combination of 0s and

1s. Instructions written using sequences of 0s and 1s are known as machine language. First generation computers used programs written in machine language.

Machine language is very cumbersome to use and is tedious and time-consuming for the programmer. It requires thousands of machine language instructions to perform even simple jobs like keeping track of a few addresses for mailing lists.

Every instruction in machine language is composed of two parts: the command itself, also known as the ‘operation code’ or opcode (like add, multiply, move, etc.), and the ‘operand’, which is the address of the data that has to be acted upon. For example, a typical machine language instruction may be represented as follows:

OP Code Operand

001 010001110

Machine Language Instruction

The number of operands varies with each computer and is therefore computer dependent.

It is evident from the above that to program in machine language, the programmer needs information about the internal structure of the computer. He will also need to remember a number of operation codes and keep track of the addresses of all the data items (i.e., which storage location holds which data item). Programming in machine language can be very tedious and time-consuming, and is still highly prone to errors. Further, locating such errors and effecting modifications is also a mammoth task. Quite understandably, programmers felt a need for moving away from machine language.


3.2 Assembly Language

Assembly language was the first step in the evolution of programming languages. It used mnemonics (symbolic codes) to represent operation codes and strings of characters to represent addresses. Instructions in assembly language may look like this:

Operation Operation address

READ M

ADD L

Assembly Language Instruction

Assembly language was designed to replace each machine code by an understandable mnemonic, and each address with a simple alphanumeric string. It was matched to the processor structure of a particular computer and was therefore (once again) machine dependent. This meant that programs written for a particular computer model could not be executed on another one. In other words, an assembly language program lacked portability.

A program written in assembly language needs to be translated into machine language before the computer can execute it. This is done by a special program called an ‘assembler’, which takes every assembly language program and translates it into its equivalent machine code. The assembly language program is called the source program, while the equivalent machine language program is called the object program. The assembler is a system program supplied by the computer manufacturer.

Second generation computers used assembly language.

3.3 High-Level Languages

High-level languages developed as a result of the lack of portability of programs (written in machine or assembly language) from one computer to another. They derive their name from the fact that they permit programmers to disregard a number of minor (low-level) hardware-related details. Also, it is apparent that the closer the mnemonics, rules and syntax of a programming language are to ‘natural language’, the easier it is for programmers to program, and the less the possibility of introducing ‘bugs’ or errors into the program. Hence, third generation languages came into being in the mid-1950s.

These procedural or algorithmic languages are designed to solve particular types of problems. They contain commands that are particularly suited to one type of application. For example, a number of languages have been designed to process scientific or mathematical problems. Others place an emphasis on commercial applications, and so on.

Unlike symbolic or machine languages, there is very little variation in these languages between computers. But it is necessary to translate them into machine code using a program known as an interpreter or a compiler. Once again, the high-level program is called the source code, while its equivalent machine language program is referred to as the object code.

As they are easy to learn and program, machine independent, easy to maintain and portable, high-level languages are very popular. Slow program execution is their main disadvantage, since programs need to be converted into machine language (by an interpreter or a compiler) before they can be executed.

Interpreter vs Compiler

Programs written in high-level languages need to be converted into machine language before the computer can execute them. ‘Interpreters’ and ‘compilers’ are translation or conversion programs that produce the machine code from high-level languages.

The original program is called a source program, and after it is translated by the interpreter or compiler, it is called an object program.


The interpreter and compiler perform the same function but in fundamentally different ways. An interpreter translates the instructions of the program one statement at a time. This translated code is executed before the interpreter begins work on the next line; thus, instructions are translated and executed simultaneously. The object code is not stored in the computer’s memory for future use. The next time the instruction is to be used, it needs to be freshly translated by the interpreter. For example, during repetitive processing of instructions in a loop, every instruction in the loop will need to be translated every time the loop is executed.

Figure: Translation Process using an Interpreter (input: program in high-level language (source program) → interpreter translates and executes one statement at a time → output: result of program execution)

A compiler, on the other hand, takes an entire high-level language program and produces a machine code version of it. This version is then run as a single program.

Generally, language statements are written by programmers in languages such as C or COBOL using editors. The source statements are contained within the file that thus gets created. Following this, the suitable language compiler is run by the programmer, who specifies the file name within which the source statements are contained. These statements are converted into their corresponding machine code, which can then be executed by the computer.

The object code can be stored in the computer’s memory for future execution. A compiler does not need to translate the source program every time it is to be executed, thereby saving execution time.

Figure: Translation Process using a Compiler (input: program in high-level language (source program) → compiler → output: program in machine language (object program))

To summarize, an interpreter allows the programmer to have online interaction with the computer, i.e., the program can be corrected or modified while it is running, which means that it is possible to debug the program as it is being executed. This, however, results in slower execution speed. On the contrary, a compiler permits offline interaction, i.e., it is not possible to make changes while the program is running. The source program needs to be modified offline and compiled every time a change is made, however minor the change may be. This can be quite frustrating for new programmers but is good for those needing fast execution speed.

Third-Generation Languages

FORTRAN: Early computers were almost exclusively used by scientists. The first high-level language, FORTRAN (FORmula TRANslation), was developed in about 1956 by John Backus at IBM. This language was designed for solving engineering and scientific problems and is still one of the most popular languages for such applications.

FORTRAN has a number of versions, with FORTRAN IV being one of the earlier popular versions. In 1977, the American National Standards Institute (ANSI) published standards for FORTRAN with a view to standardizing the form of the language used by manufacturers. This standardized version is called FORTRAN 77.

COBOL: COBOL (COmmon Business Oriented Language), the initial language designed for commercial

applications, was developed in 1959 under the direction of Grace Hopper (a programmer in the US Navy) by a

team of computer users and manufacturers. The maintenance and further growth of the language was handed

over to a group called CODASYL (COnference on DAta SYstems Languages).

It is written using statements that resemble simple English and can be understood easily. For example, to add two numbers (stored in variables A and B), a simple statement in COBOL would be: ADD A TO B GIVING C.


COBOL was standardized by ANSI in 1968 and in 1974. COBOL became the most widely used programming

language for business and data processing applications.

BASIC: BASIC (Beginner’s All-purpose Symbolic Instruction Code) was developed in 1964 by Thomas Kurtz and John Kemeny, two professors at Dartmouth College, as a means of instruction for undergraduates.

This language was primarily designed for beginners and is very easy to learn. It was immediately picked up for

most business and general-purpose applications, particularly on small computers. BASIC later ended up as the

chief language at the centre of the personal computer revolution.

A minimum version of BASIC was standardized by ANSI and is so simple that it has been incorporated in

every subsequent version of BASIC. Some versions of BASIC include MBASIC (Microsoft BASIC) and CBASIC

(Compiler-based BASIC).

One of the newer versions of BASIC, commonly known as Visual Basic has also evolved from the original

BASIC language. It contains various statements and functions that can be used to create applications for a

Windows or GUI environment.

PASCAL: PASCAL was designed by Niklaus Wirth, a Swiss professor, in 1971. He created a far more structured language for teaching and christened it Pascal (after Blaise Pascal, the French mathematician who

built the first successful mechanical calculator). His primary aim was to provide a language that supported

beginners in learning good problem-solving and programming techniques.

In addition to manipulation of numbers, PASCAL supports manipulation of vectors, matrices, strings of

characters, records, files and lists, thereby supporting non-numeric programming. Hence it has proved to be an

attractive language for professional computer scientists.

PASCAL has been standardized by the ISO (International Organization for Standardization) and ANSI.

ADA: ADA (named after Ada Augusta, Countess of Lovelace, Charles Babbage’s collaborator) was developed for the US Department of Defense in 1981, and PL/1 (Programming Language 1) was developed by IBM in the late 1960s. Both these languages were developed with both scientific and business use in mind.

LISP: LISP (LISt Processing) was developed in the late 1950s by John McCarthy of the Massachusetts Institute of Technology. It was put into practice in 1959 and was better equipped to deal with recursive algorithms. It has become the standard language within the artificial intelligence community.

C and C ++: The C language was developed by Dennis Ritchie of Bell Laboratories for implementing the

UNIX operating system. An extension of it was developed by Bjarne Stroustrup of Bell Laboratories, which he called C++. C++ can also be used to write procedural programs like C, but the reason for its increased popularity is

perhaps because it is capable of handling the rigours of Object-Oriented Programming. C and C++ are extensively

used by professional programmers as general-purpose languages.

JAVA: Similar to C++, Java is a simplified object-oriented language in which the attributes that are susceptible

to programming errors have been removed. It was specifically designed as a network-oriented language to write

programs that would not run the risk of transmitting computer viruses and could therefore be safely downloaded

through the Internet. Applets, or small Java programs, can be used to develop Web pages including a wide variety of multimedia functions.

Java is a platform-independent language that is secure to use over the Internet.

Fourth-Generation Languages

Fourth-generation languages refer to software packages which are mostly written in one of the earlier-generation languages (FORTRAN, C and so on) for a specific application. They are very useful for the user, who can perform a task without writing programs: the user has only to enter a command to call the program, which is readily available in the package. Such a language is also called a command line language.


Some of the commonly used 4GL packages are dBase, FoxPro, Oracle, SQL (database management); WordStar, MS Word, PageMaker (desktop publishing); Lotus 123, MS Excel (electronic spreadsheets); AutoCAD (computer aided design and drafting); IDEAS, PRO/E, Unigraphics, Solidworks (computer aided design and solid modelling); and ANSYS, NASTRAN and ADINA (finite element analysis for engineering components). These programs, specially produced for specific tasks, are called Application Software.

Fifth-Generation Languages

Fifth-generation languages are an outgrowth of research in the area of artificial intelligence. They are, however, still in their infancy. PROLOG (PROgramming in LOGic) was designed in the early 1970s by Alain Colmerauer, a French computer scientist, and Philippe Roussel, a logician. Logical processes can be programmed and deductions made automatically using PROLOG.

A number of other 5GLs have been developed to meet specialized needs. Some of the more popular ones include PILOT (Programmed Instruction, Learning, Or Testing), used to write instructional software; LOGO, a version of LISP, developed in the 1960s to help educate children about computers; SNOBOL (String-Oriented Symbolic Language), designed for list processing and pattern matching; and GPSS (General Purpose System Simulator), used for modelling environmental and physical events.

4. ORGANIZATION OF A DIGITAL COMPUTER

A computer is a machine that manipulates data according to a set of instructions. The ability to store and execute a prerecorded list of instructions, called a program, makes computers extremely versatile. Hence, a computer is a programmable electronic device that responds to a specific set of instructions and performs high-speed processing of numbers, text, graphics, symbols and sound. Modern computers are electronic and digital. The Central Processing Unit (CPU), wires, transistors, circuits, memory, peripheral devices, etc., are called hardware, while the instructions and data are called software.

Technically speaking, a computer is a programmable machine, which executes a programmed list of instructions and also responds to new instructions that are given to it. Basically, computers are of three types: digital, analog and hybrid. The digital computer stores data in discrete units and performs arithmetical and logical operations at very high speed. The analog computer has no memory and is slower than the digital computer but has a continuous rather than a discrete input. The hybrid computer combines some of the advantages of digital and analog computers.

As per the Oxford Dictionary, the definition of a computer is, ‘An automatic electronic apparatus for making calculations or controlling operations that are expressible in numerical or logical terms.’

4.1 Central Processing Unit

Execution of programs is the main function of the computer. The programs or the set of instructions are stored in

the computer’s main memory and are executed by the CPU. The CPU processes the set of instructions along

with any calculations and comparisons to complete the task (refer Figure). Additionally, the CPU controls and

activates various other functions of the computer system. It also activates the peripherals to perform input and

output functions.

Figure: The Central Processing Unit (Control Unit, Arithmetic Logic Unit (ALU) and Memory Unit)


The CPU consists of three major components: the register set (associated with the main memory), which stores intermediate data while processing programs and commands; the ALU, which performs the necessary micro-operations for processing programs and commands; and the control unit, which controls the transfer of information among the registers and directs the ALU on the instructions to follow.

I. Control Unit

The control unit not only plays a major role in transmitting data from a device to the CPU and vice versa but also plays a significant role in the functioning of the CPU. It does not actually process the data but manages and coordinates the entire computer system, including the input and the output devices. It retrieves and interprets the commands of the programs stored in the main memory and sends signals to other units of the system for execution. It does this through some special-purpose registers and a decoder. The special-purpose register called the instruction register holds the current instruction being executed, and the program counter holds the address of the next instruction to be executed. The decoder interprets the meaning of each instruction supported by the CPU. Each instruction is also accompanied by microcode, i.e., basic directions that tell the CPU how to execute the instruction.
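The control unit’s fetch-decode-execute cycle can be sketched as a toy simulator in Python. Everything here is invented for illustration: the three-instruction set (LOAD/ADD/HALT), the memory contents and the single accumulator register are assumptions, not a description of any real machine.

```python
# A toy fetch-decode-execute loop illustrating the control unit's role.
# Each memory word is a (opcode, operand) pair; real machines encode both in bits.
memory = [("LOAD", 5), ("ADD", 7), ("HALT", 0)]

acc = 0        # accumulator register
pc = 0         # program counter: address of the next instruction
running = True

while running:
    opcode, operand = memory[pc]  # fetch: instruction register <- memory[pc]
    pc += 1                       # program counter advances to the next instruction
    if opcode == "LOAD":          # decode + execute: load a value into the accumulator
        acc = operand
    elif opcode == "ADD":         # the control unit hands arithmetic to the ALU
        acc += operand
    elif opcode == "HALT":        # stop the cycle
        running = False

print(acc)  # 12
```

The loop shows the division of labour described above: the control unit sequences instructions (fetch and decode), while the arithmetic itself belongs to the ALU.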

II. Arithmetic Logic Unit

The ALU is responsible for arithmetic and logic operations. This means that when the control unit encounters an instruction that involves an arithmetic operation (add, subtract, multiply, divide) or a logic operation (equal to, less than, greater than), it passes control to the ALU, which has the necessary circuitry to carry out these arithmetic and logic operations.
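The division of labour described above can be sketched in a few lines of code. This is only an illustrative model, not real hardware: the dictionary of operations stands in for the ALU's circuitry, and the opcode names are invented for the example.

```python
# Illustrative sketch of an ALU: an opcode selects the arithmetic or
# logic micro-operation to apply to two operands. The opcode names
# and the dispatch table are invented for this example.

def alu(opcode, a, b):
    """Perform one arithmetic or logic operation on operands a and b."""
    operations = {
        "ADD": lambda x, y: x + y,
        "SUB": lambda x, y: x - y,
        "MUL": lambda x, y: x * y,
        "DIV": lambda x, y: x // y,   # integer division
        "EQ":  lambda x, y: x == y,   # logic comparisons
        "LT":  lambda x, y: x < y,
        "GT":  lambda x, y: x > y,
    }
    return operations[opcode](a, b)

print(alu("ADD", 7, 5))   # 12
print(alu("LT", 7, 5))    # False
```

In a real CPU the control unit would decode the opcode and route the operands to the appropriate circuit; here a dictionary lookup plays that role.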

4.2 Memory

Memory is used for storage and retrieval of instructions and data in a computer system. The CPU contains several registers for storing data and instructions, but these can store only a few bytes. If all the instructions and data being executed by the CPU were to reside in secondary storage (like magnetic tapes and disks) and be loaded into the registers of the CPU as the program execution proceeded, the CPU would be idle for most of the time, since the speed at which the CPU processes data is much higher than the speed at which data can be transferred from disks to registers. A memory system is mainly classified into the following categories:

Primary Storage Memory

This is the main memory of the computer, which communicates directly with the processor. This memory is large in size and fast, but not as fast as the internal memory of the processor. It comprises integrated chips mounted on a printed circuit board plugged directly into the motherboard. Random Access Memory (RAM) is an example of primary storage memory.

(i) Magnetic Core Memory: It was previously used and was termed Random Access Computer Memory. It used small magnetic rings, termed cores, for storage.

(ii) Internal Processor Memory: This is a small set of high-speed registers placed inside a processor and used for storing temporary data while processing.

(iii) Static and Dynamic RAM: There are two types of integrated circuit RAM chips available in the market, Static RAM (SRAM) and Dynamic RAM (DRAM). Static RAM stores binary information using clocked sequential circuits. The information stored in the SRAM remains valid as long as power is supplied to the unit, whereas DRAM stores information inside a chip in the form of electric charges supplied to capacitors.

(iv) Read Only Memory: Most of the memory in a general purpose computer is made of RAM integrated circuit chips, but a portion of the memory may be constructed using ROM chips. The contents of ROM are permanently resident in the computer and do not change once the production of the computer is completed.

The ROM portion of the main memory is used for storing an initial program called the bootstrap loader. The bootstrap loader is a program whose function is to start the computer software operation when power is turned on. The contents of ROM remain unchanged even after the power is switched off and on again.


(v) PROM: The contents of a ROM chip are fixed at the time of manufacture; therefore, a new kind of ROM called PROM (Programmable Read Only Memory) was designed. This is also non-volatile in nature and can be written only once using special electronic equipment. In both ROM and PROM, the write operation can be performed only once and the written information cannot be edited later on.

(vi) EPROM and EEPROM: Erasable Programmable Read Only Memory or EPROMs are typically used

by R&D personnel who experiment by changing microprograms on the computer system to test their

efficiency. EPROM chips are of two types: EEPROMs (Electrically EPROM) and UVEPROM

(Ultraviolet EPROM).

(vii) Cache Memory: This is another category of memory used by modern computer systems. It temporarily

stores and supplies the data and instructions from the main memory to the internal memory (registers) to

speed up the process.

Cache memories are small, high speed memories that function between the CPU and the primary memory.

They are faster than the main memory with access time closer to the speed of the CPU.
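The speed-up a cache provides can be made concrete with a toy model. This is a hedged sketch only: the class name, the four-entry capacity and the oldest-entry eviction policy are illustrative choices, not a description of real cache hardware.

```python
# Illustrative model of a cache between the CPU and main memory:
# recently read words are kept in a small fast store so that repeated
# accesses to the same address avoid the slow main memory.

class CachedMemory:
    def __init__(self, main_memory, cache_size=4):
        self.main_memory = main_memory   # slow primary storage (a list)
        self.cache = {}                  # small fast store: address -> word
        self.cache_size = cache_size
        self.hits = self.misses = 0

    def read(self, address):
        if address in self.cache:        # cache hit: fast path
            self.hits += 1
        else:                            # cache miss: fetch from main memory
            self.misses += 1
            if len(self.cache) >= self.cache_size:
                # evict the oldest entry (dicts preserve insertion order)
                self.cache.pop(next(iter(self.cache)))
            self.cache[address] = self.main_memory[address]
        return self.cache[address]

mem = CachedMemory(main_memory=[10, 20, 30, 40, 50])
for addr in [0, 1, 0, 1, 2]:             # repeated addresses hit the cache
    mem.read(addr)
print(mem.hits, mem.misses)               # 2 3
```

Only the first access to each address reaches main memory; the repeats are served from the cache, which is the effect that narrows the speed gap between the CPU and primary memory.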

Secondary Storage Memory

This stores all the system software and application programs and is basically used for data backups. It is much

larger in size and slower than primary storage memory. Hard disk drives, floppy disk drives and flash drives are

a few examples of secondary storage memory.

4.3 Other Significant Components

Register and Buffer

Normally, the memory cycle time is approximately 1–10 times higher than the CPU cycle time; hence, temporary storage is provided within the CPU in the form of CPU registers. The CPU registers are termed fast memory and can be accessed almost instantaneously.

Further, the number of bits a register can store at a time is called the length of the register. Most of the CPUs sold today have 32-bit or 64-bit registers. The size of the register is also called the word size and indicates the amount of data that a CPU can process at a time. Thus, the bigger the word size, the faster the speed of the computer to process data.

The number of registers varies in different computers. The following are the typical registers found in most computers:

I. Memory Address Register (MAR): It specifies the address of the memory location from which data is to be accessed (in case of a read operation) or to which data is to be stored (in case of a write operation).

II. Memory Buffer Register (MBR): It receives data from the memory (in case of a read operation) or contains the data to be written into the memory (in case of a write operation).

III. Program Counter (PC): It keeps track of the instruction that is to be executed next, after the execution of the current instruction.

IV. Accumulator (AC): It interacts with the ALU and stores the input or output operand. This register, therefore, holds the initial data to be operated upon, the intermediate results and the final results of processing operations.

V. Instruction Register (IR): The instructions are loaded in the IR before their execution, i.e., the instruction register holds the current instruction that is being executed.
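The registers listed above cooperate in the fetch-decode-execute cycle, which can be sketched as a toy machine. This is not a real instruction set: the opcodes, the instruction format and the tiny program are invented for illustration, and the MAR/MBR are modelled as plain local variables.

```python
# Illustrative fetch-decode-execute cycle. PC points at the next
# instruction, IR holds the current one, MAR/MBR stage traffic to and
# from memory, and AC accumulates results. The ISA is invented.

def run(program, memory):
    pc = 0                            # Program Counter
    ac = 0                            # Accumulator
    while True:
        ir = program[pc]              # fetch into the Instruction Register
        pc += 1                       # PC now points at the next instruction
        op, operand = ir              # decode
        if op == "LOAD":              # AC <- memory[operand]
            mar = operand             # address placed in the MAR
            mbr = memory[mar]         # data arrives in the MBR
            ac = mbr
        elif op == "ADD":             # AC <- AC + memory[operand]
            mar = operand
            mbr = memory[mar]
            ac += mbr
        elif op == "STORE":           # memory[operand] <- AC
            mar, mbr = operand, ac
            memory[mar] = mbr
        elif op == "HALT":
            return memory

memory = [7, 5, 0]                         # data at addresses 0, 1 and 2
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
print(run(program, memory)[2])             # 12
```

The program loads 7, adds 5 and stores the sum at address 2; note how every memory access passes through the MAR (the address) and the MBR (the data), exactly as described for those registers above.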

Motherboard

A motherboard is the main PCB (Printed Circuit Board): a flat fibreglass platform which hosts the CPU, the main electronic components, device controller chips, main memory slots, slots for attaching the storage devices and other subsystems.


System Clock

Nowadays, PC systems usually have several system clocks. Each clock vibrates at a particular frequency, which is measured in MHz (Megahertz). One tick of the clock is the smallest unit of time in which processing occurs and is at times termed a cycle. Generically, the term ‘system clock’ refers to the speed of the memory bus functioning on the motherboard rather than that of the processor.

CMOS

CMOS (Complementary Metal Oxide Semiconductor) is the technology used for making semiconductors (integrated circuits) like processors, chipset chips, DRAM, etc. A benefit of CMOS is that it requires very little power in comparison to various other semiconductor technologies. This benefit is the main reason CMOS is used in battery-powered devices: it minimizes the amount of power required from the battery, resulting in long battery life.

Buses

A bus is a set of lines which carries information to and from the memory. Along with data, memory addresses are carried on the bus, as they identify the location in memory of the data being processed. The speed of the bus is controlled by the system clock speed and is a main driver of bus performance. The wider the data part of the bus, the more information can be transmitted simultaneously, which means higher performance. All computers use three basic buses: the control bus, the address bus and the data bus (refer Figure).

Figure: Buses. The control, address and data buses connect the Central Processing Unit, the memory, and the input/output external connections (printer, monitor, mouse, etc.).

Expansion Slots

An expansion slot is located inside a computer, on the motherboard or riser board, and allows additional boards to be connected to it. Expansion slots are commonly found in IBM compatible computers as well as in other brands of computers.

Expansion Cards

A card controller is placed on the motherboard inside a computer and is commonly termed simply a controller. It is specific hardware which acts as an interface between the motherboard and other computer components, for example, hard drives, optical drives, printers, keyboards and mice. Most motherboard chips have built-in controllers for essential components. Some controller cards are installed in the PCI slot of the computer. Three types of controller cards, namely, the video card, sound card and network card, are discussed here.

I. Video Card: A video card is also termed a video adapter, graphics-accelerator card, display adapter or graphics card. Basically, it is an expansion card utilized to create output images for display on the output devices. A modern video card consists of a printed circuit board on which the components are mounted.

II. Sound Card: A sound card, also called an audio card, is used as an expansion card in a computer system to facilitate the input and output of audio signals to and from the computer. It is used to provide the audio components for multimedia applications, such as music composition, editing video or audio presentations, education and entertainment (games). Some computers have built-in audio/sound capability while others need additional expansion cards to provide audio capability.

III. Network Card: A network card is also termed as network

adapter, network interface and NIC. It is an expansion card and is installed

on a computer system which physically connects the computer to a

Local Area Network (LAN). The most familiar type of network card

used is the Ethernet card. Other types of network cards include wireless

network cards and token ring network cards.

Power Supply

The term power supply refers to a device or system which provides electrical energy to an output load. It is also called a Power Supply Unit or PSU. Power supplies incorporate electric circuits to tightly manage the output voltage and current up to specific values; this type of regulation is termed a stabilized power supply. The computer system uses a Switched Mode Power Supply (SMPS), as shown in the Figure.

Modem

Modem is defined as a device that modulates an analog carrier signal to encode digital information, and

also demodulates such a carrier signal to decode the transmitted information.

Modem is the short form of MOdulator/DEModulator. In simple terms, it is a communication device that connects two computers or digital devices across a wired or wireless network. Modems are predominantly used for connecting home computer users to an Internet Service Provider (ISP) in order to access the Internet.

Modems are internal or external. An internal modem is installed inside the case of the computer, directly

attached to the motherboard. An external modem is connected to the computer through a serial or Universal

Serial Bus (USB) cable.

Peripheral Devices

A computer system is a dumb and useless machine if it is not capable of communicating with the outside world. It is very important for a computer system to have the ability to communicate with the outside world, i.e., to receive and send data and information.

4.4 Input Devices

I. Basic Input Devices

Input devices are used to transfer user data and instructions to the computer. The basic input devices can be

classified into the following categories:



(i) Keyboard Devices

Keyboard devices allow input into the computer system by pressing a

set of keys mounted on a board, connected to the computer system.

Keyboard devices are typically classified as general-purpose keyboards

and special-purpose keyboards.

The most popular keyboard used today is the 101-key keyboard with a traditional QWERTY layout, comprising an alphanumeric keypad, 12 function keys, a variety of special function keys, a numeric keypad and dedicated cursor control keys. It is so called because of the arrangement of its alphanumeric keys in the upper-left row.

(ii) Mouse

A mouse is a small input device used to move the cursor on a computer screen to give instructions

to the computer and to run programs and applications. It can be used to select menu commands,

move icons, size windows, start programs, close windows, etc.

II. Special Input Devices

The keyboard facilitates input of data in text form only. While working with display-based packages, we usually point to a display area and select an option from the screen (the fundamental mode of GUI applications). The need for user-friendly input devices that can rapidly point to a particular option displayed on screen and support its selection resulted in the advent of various point-and-draw devices.

(i) Touch Screen

A touch screen is probably one of the simplest and most intuitive of all input devices. It

uses optical sensors in or near the computer screen that can detect the touch of a finger

on the screen.

(ii) Touch Pads

A touch pad is a touch sensitive input device which takes user input to control the onscreen

pointer and perform other functions similar to that of a mouse. Touch pads are pressure

and touch sensitive. They use finger drag and tapping combinations to perform multiple

control operations.

(iii) Light Pen

The light pen is a small input device used to select and display objects on a screen. It

functions with a light sensor and has a lens on the tip of a pen shaped device.

(iv) Trackball

The trackball is a pointing device that is much like an inverted mouse. It consists of a ball set in a small external box, or adjacent to and in the same unit as the keyboard of some portable computers.

(v) Joystick

The joystick is a vertical stick that moves the graphic cursor in the direction the stick is moved. It

consists of a spherical ball, which moves within a socket and has a stick mounted on it.

Figure: A QWERTY keyboard, showing the function keys, numeric keypad, cursor movement keys, space bar, Enter key and Shift key.


Video games, training simulators and control panels of robots are some common uses of a joystick.

(vi) Scanning Devices

Scanning devices are input devices used for direct data entry from the source document into the computer system. With the help of a scanner, you can capture images and documents and convert them into digital formats for easy storage on your computer. There are two types of scanners, contact and laser. Both illuminate the image first to calculate the reflected light and determine the value of the captured image. Hand-held contact scanners make contact as they are brushed over the printed matter to be read. Laser-based scanners are more versatile and can read data passed near the scanning area.

(vii) Optical Mark Recognition

The Optical Mark Recognition (OMR) devices can scan marks from a computer-readable

paper. Such devices are used by universities and institutes to mark test sheets where the

candidate selects and marks the correct answer from multiple choices given on a special

sheet of paper. These marksheets are not required to be evaluated manually as they are

fed in the OMR and the data is then transferred to the computer system for further evaluation.

(viii) Optical Character Recognition

Optical Character Recognition (OCR) is used to translate images of handwritten, typed or printed text into a machine-editable format. OCR software is used to convert such images into a text or word processor file for performing text modification. OCR is widely used in pattern recognition, artificial intelligence and computer vision.

(ix) Magnetic Ink Character Recognition

Magnetic Ink Character Recognition (MICR) is like an optical

mark recognition device and is used only in the banking industry.

MICR devices scan cheque numbers directly from the cheque

leaflets and then automatically feed them in the computer systems

for further use, doing the job quickly, accurately and efficiently.

Banks using MICR technology print cheque books on special

types of paper.

(x) Optical Bar Code Reader

Data coded in the form of small vertical lines forms the basis of bar coding. Alphanumeric data is represented using adjacent vertical lines called bar codes.

(xi) Digitizer

Digitizers are used to convert drawings, pictures and maps into a digital format for storage in the computer. A digitizer consists of a digitizing or graphics tablet, which is a pressure-sensitive tablet, and a pen with the same X and Y co-ordinates as on the screen.

(xiii) Web Camera

A web camera is a video capturing device attached to the computer system, mostly through a USB port. It is used for video conferencing, video security, as a control input device and also in gaming.

Figure: An OMR answer sheet (multiple-choice options a–d).

Figure: Magnetic ink characters.


(xiv) Glide Pad

A glide pad uses a touch-sensitive pad for controlling the cursor on the display screen. The user slides a finger across the pad and the cursor follows the finger movement. There are buttons to click, or the user can tap on the pad with a finger. The glide pad is a popular alternative pointing device for laptops and is built into the keyboard. The user can use either the buttons or taps on the pad for right or left clicking. The glide pad can be installed on notebook computers, POS terminals, specialized keyboards, touchpads, mouse replacements, etc.

4.5 Output Devices

An output device is an electromechanical device that accepts data from the computer and translates it into a form that can be understood by the outside world. The processed data, stored in the memory of the computer, is sent to an output unit, which then transforms the internal representation of data into a form that can be read by the users.

Normally, the output is produced on a display unit like a computer monitor or can be printed through a

printer on paper. At times, speech outputs and mechanical outputs are also used for some specific applications.

I. Monitor or Visual Display Devices

It is almost impossible to even think of using a computer system without a display device. A display device is the

most essential peripheral of a computer system. Display screen technology may be of the following categories:

(i) Cathode Ray Tube: The Cathode Ray Tube (CRT) consists of an electron gun, whose electron beam is controlled by electromagnetic fields, and a phosphor-coated glass display screen structured into a grid of small dots known as pixels. The image is created by the electron beam produced by the electron gun, which is directed onto the phosphor coating by the electromagnetic field.

(ii) Liquid Crystal Display: Liquid Crystal Display (LCD) was first introduced in the 1970s in digital clocks and watches, and is now widely used in computer display units. Replacing the CRT with the LCD made display units slimmer and more compact, but image quality and colour capability became comparatively poorer.

The main advantage of LCD is its low energy consumption. It finds its most common usage in portable

devices where size and energy consumption are of main importance.

II. Printers

Printers are used for creating paper output. There is a huge range of commercially available printers today

(estimated to be 1500 different types). These printers can be classified into categories based on:

• Printing technology.

• Printing speed.

• Printing quality.

(i) Impact Printers

Impact printers are the oldest print technologies still in active production. Impact printers use forceful impact to

transfer ink to the media, similar to a typewriter. The three most common forms of impact printers are dot matrix,

daisy wheel and line printers.

Dot Matrix: Dot matrix printers are the most widely used impact

printers in personal computing. These printers use a print head

consisting of a series of small metal pins that strike on a paper

through an inked ribbon, leaving an impression on the paper through

the ink transferred.


Daisy Wheel Printers: These printers have print heads composed of metallic

or plastic wheels cut into petals. Each petal has the form of a letter (in capital

and lower case), number or punctuation mark on it. When the petal is struck

against the printer ribbon, the resulting shape forces ink onto the paper. Daisy

wheel printers are loud and slow.

Line Printers: Line printers have a mechanism that allows multiple characters to be simultaneously printed on

the same line. Line printers, as the name implies, print an entire line of text at a time. Because of the nature of the

print mechanism, line printers are much faster than dot matrix or daisy wheel printers; however, they are quite

loud, have limited multi-font capability and often produce lower print quality.

(ii) Non-Impact Printers

Non-impact printers are much quieter than impact printers as their printing heads do not

strike the paper. It is a printer that prints without banging a ribbon onto paper. LED

(Light Emitting Diode), Laser, inkjet, solid ink, thermal wax transfer and dye sublimation

printers are examples of non-impact printers. Most non-impact printers produce dot

matrix patterns.

Inkjet: Inkjet printers are based on the use of a series of nozzles for propelling droplets of printing

ink directly on almost any size of paper. They, therefore, fall under the category of non-impact

printers.

Laser: Laser printers work on the same printing technology as photocopiers, using static electricity

and heat to print with a high quality powder substance known as toner.

Thermal Printer: A thermal printer or direct thermal printer produces a printed image by selectively heating

coated thermochromic paper or thermal paper, when the paper passes over the thermal print head. The coating

turns black in the areas where it is heated to produce an image. Two-color direct thermal printers can print both

black and an additional color (usually red), by using heat at two different temperatures.

Page Printer: A page printer prints a page at a time, at speeds from four to more than 800 ppm. Laser, LED, solid ink and electron beam imaging printers are examples of this category. All of these printers apply toner or ink to a drum, from which it is transferred to the entire page in one cycle for black and white and in multiple cycles for colour.

III. Plotters

Plotters are used to make line illustrations on paper. They are capable of producing charts, drawings, graphics, maps and so on. A plotter is much like a printer but is designed to print graphs instead of alphanumeric characters. Based on the technology used, there are mainly two types of plotters: pen plotters and electrostatic plotters.

(i) Flatbed Plotters

Flatbed plotters have a flat base like a drawing board on which the paper is laid. One or

more arms, each of them carrying an ink pen, moves across the paper to draw. The arm

movement is controlled by a microprocessor (chip). The arm can move in two directions,

one parallel to the plotter and the other perpendicular to it (called the X and Y directions).

With this kind of movement, it can move very precisely to any point on the paper placed

below.

The advantage of flatbed plotters is that the user can easily control the graphics. The user can manually pick up the arm at any time during the production of graphics and place it at any position on the paper to alter the graphics as desired. The disadvantage is that flatbed plotters occupy a large amount of space.


(ii) Drum Plotters

Drum plotters move the paper with the help of a drum revolver during printing. The arm carrying a pen moves only in one direction, perpendicular to the direction of the motion of the paper. This means that while printing, the plotter pens print on one axis of the paper and the cylindrical drum moves the paper on the other axis.

5. PROCESSING OF DATA

Data processing can be defined as the process of converting raw data into suitable information using a series of operations like classifying, sorting, summarizing and tabulating it for easy storage and retrieval. Processed data is called information.

Data, especially large volumes of it, unless processed properly, are not of much use in the current information-driven world. Relevant information can give a definite edge to a business to stay ahead of its competition and plan for the future. In fact, the speed at which information can be extracted from data (a process called data processing) is just as crucial as the information itself. Information usually loses its value if it is not up-to-date. Automatic Data Processing (ADP) applications are gaining wide popularity in the market to solve this very problem. They not only save time but also reduce the cost of data processing. An ADP application, once configured, is ideal for converting similarly structured data into specific sets of information using predefined rules of selection, processing and presentation. Data processing can also include the conversion of one type of information into another for legacy systems transfer.

Typically, a data processing cycle can be broadly divided into five stages:

• Data Collection

• Data Preparation

• Data Input

• Data Processing

• Information Output

The stages of data processing are discussed below:

• Data Collection: Data collection is a term used to describe the process of preparing and collecting data. The purpose of data collection is to obtain information to keep on record, to make decisions about important issues and to pass information on to others. A formal data collection process is necessary as it ensures that the data gathered are defined, accurate and valid.

• Data Preparation: Data preparation or data preprocessing means the manipulation of data into a form suitable for further analysis and processing. It is a process that involves many different tasks, which cannot be fully automated. Many of the data preparation activities are routine, tedious and time consuming. Data preparation includes the processes of integration and transformation, which collectively prepare the data from which knowledge (the output of data mining) is evaluated.

Figure: Process of data preparation.


Data preparation is essential for successful data mining. Poor quality data results in incorrect and unreliable data mining results. Data preparation improves the quality of data, which helps in improving the quality of data mining results.

• Data Input: The term ‘data input’ is used to denote the changes which are inserted into a system to activate or modify a process. It is expressed in terms of input fields, variables, parameters, values and files.

• Data Processing: The term ‘data processing’ refers to the computer programs used to summarize, analyse and convert data into usable information. The process can be automated to run on a computer. Since data are most useful when well presented, and thereby much more informative, data processing systems are often referred to as ‘information systems’.

• Information Output: In information processing, ‘output’ is the process of transmitting information. Output can be in the form of printed paper, audio, video, etc. Data is entered through various forms (input) into a computer and, after processing, the information is presented in a human-readable form (output).
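The stages above can be sketched as a small pipeline over a toy data set. This is only an illustration: the record format, the field names and the summary rules are invented, and the data input stage is implicit in passing the prepared rows to the processing function.

```python
# The data processing cycle sketched as functions over invented data:
# collection gathers raw records, preparation cleans them, processing
# summarizes, and output renders a human-readable result.

def collect():                      # Data Collection: gather raw records
    return ["  alice , 82 ", "BOB,91", "carol ,77"]

def prepare(raw):                   # Data Preparation: clean and normalize
    rows = []
    for line in raw:
        name, marks = line.split(",")
        rows.append((name.strip().title(), int(marks)))
    return rows

def process(rows):                  # Data Processing: summarize the rows
    total = sum(marks for _, marks in rows)
    return {"count": len(rows), "average": total / len(rows)}

def output(info):                   # Information Output: readable form
    return f"{info['count']} students, average {info['average']:.1f}"

print(output(process(prepare(collect()))))   # 3 students, average 83.3
```

Note how each stage consumes the previous stage's result, mirroring the cycle described above: raw data is of little use until preparation and processing turn it into presentable information.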

Just as there are different types of data (classified either by usage, attributes or content), there are different methods of processing them. These are as follows:

• Real Time Data Processing: In this mode, data is processed almost immediately (in real time) and in a continuous flow. This is of particular advantage when the lifespan of information is small and core business activities are involved. The advantage of real time processing is that the derived information is up-to-date and so it is more relevant for decision making. For instance, in a bank or in an ATM (Automated Teller Machine), as soon as you deposit money in your account, your account status (the balance standing to your credit) is updated instantaneously. This enables you as well as the bank to know the exact status of funds in real time, or in other words, as of this minute. Similarly, in a railway reservation system, a train ticket booked from anywhere in the world must update the central database in real time to ensure that a seat once booked is not sold to anybody else in the world. Real time processing also requires relatively little storage as compared to batch processing.

• Batch Data Processing: Real time processing requires high speed broadband connections so that the

data inputted from different computers or locations can be used to update a centralized server and

database. Setting up such networks is expensive and not always feasible because sometimes the data

does not need to be processed immediately. For instance, in a BPO (Business Process Outsourcing)

outfit, hundreds of operators may be inputting data, which can be made available to the client only after

it is checked and verified by a supervisor(s). Such situations call for batch mode processing, which is

used when the conversion of data into information is not required to be done immediately and therefore

this data processing is done in lots or batches. The advantages of batch processing are that it is cheaper

and processing can be done offline.

It should be noted that data processing and data conversion are technically quite different: whereas data conversion only means converting data from one form to another, data processing means the conversion of data into information, or sometimes vice versa.

Sorting

A file is a group of records, and the need often arises to arrange the records in a file in some order, either ascending or descending. Arranging the records of a file in a specific order is called sorting. Sequencing is based on some key of the record, which could be numeric, like the enrolment number of a student, or alphabetic, like the names of students following the ASCII sequence. The simplest case is sorting the records using one field of the record as the primary key, in which case the key must be unique, i.e., no two students can have the same enrolment number; hence a file of student records in a university may be sequenced in ascending order of student enrolment number.


By introducing a further key in the sorting process, a more complex order may be produced. For example, suppose each record of the student file also contains a field for the code of the course in which the student is enrolled. Now the order of sorting may be student enrolment number within course. This means that all the records for one course code are presented first, each one in ascending sequence of enrolment number; then all the records for the next course code are presented in sequence, and so on. In this example, two keys have been used in the sorting process: the course code is called the primary key and the student enrolment number is known as the secondary key.
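The two-key sort just described — enrolment number within course — can be sketched with Python's built-in `sorted()`. The record values and course codes here are invented for illustration; the point is that a tuple key makes course code the primary key and enrolment number the secondary key.

```python
# Two-key sorting: the tuple (course, enrolment_no) sorts records by
# course code first (primary key), then by enrolment number within
# each course (secondary key). The data is invented for the example.

records = [
    {"enrolment_no": 1003, "name": "Cimone", "course": "BCA"},
    {"enrolment_no": 1001, "name": "Ankit",  "course": "MCA"},
    {"enrolment_no": 1005, "name": "Sahil",  "course": "BCA"},
]

by_course_then_enrolment = sorted(
    records, key=lambda r: (r["course"], r["enrolment_no"])
)
for r in by_course_then_enrolment:
    print(r["course"], r["enrolment_no"], r["name"])
# BCA 1003 Cimone
# BCA 1005 Sahil
# MCA 1001 Ankit
```

All BCA records appear first, in ascending enrolment order, followed by the MCA records, exactly the ordering the text describes.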

Since sorting is a very common data-processing requirement, manufacturers provide sort utility software which enables users to specify their particular sequencing requirements by means of simple parameters. Software is usually available for sorting files held on all types of storage devices. The user specifies the sort keys and also the details about the type of file, such as the storage device, file labels and record structure. The sort utility program reads the un-sequenced input file and, by means of various copying techniques, ultimately produces as output a copy of the input file in the required sequence.

An Unsorted Table

Enrolment_No Name

1001 Ankit

1005 Sahil

1003 Cimone

1004 Dolly

1002 Ankita

A Sorted Table

Enrolment_No Name

1001 Ankit

1002 Ankita

1003 Cimone

1004 Dolly

1005 Sahil

Note: Enrolment_No is the primary key on which sorting is done.

Merging

Let us take an example to understand the concept of merging two files. Let one file be named File1 and the other File2. Both files contain student records of, say, Class VII, arranged in increasing order of enrolment number.

File1 is a file maintaining the records of the students participating in sports competition and File2 maintains

the records of students of the same class but participating in music competition. Both the files have records

arranged in ascending order of enrolment number.

Now, Class_7_file must consist of the records of all the students studying in that class, irrespective of which hobby class they choose. This file can be created by merging the two files File1 and File2 and, as mentioned above, the two files to be merged must be sorted in the same specific order before we can merge them. When File1 (students of Class VII participating in sports) and File2 (students of Class VII participating in music) are merged, we get the file named Class_7_file consisting of the records of all the students who are studying in that class.

File 1 Students Participating in Sports

Enrolment_No Name Activity

1001 Ankit Sports

1003 Cimone Sports

1004 Dolly Sports


File 2 Students Participating in Music

Enrolment_No Name Activity

1002 Ankita Music

1005 Sahil Music

1006 Deepa Music


Observe that after the enrolment number 1001 in File1, the next key, 1002, is present in File2; hence merging the two files will order the records. Both files are opened for reading, records are read one by one and their keys are compared, and whichever key is lower in order is written to a new file, in this case Class_7_file. When EOF is reached for both files, the result file Class_7_file contains the records of all the students of that class. If EOF is reached for one file first, the leftover records in the other file are written to the result file. We now have a computerized list of all student records for Class VII.

To summarize, it may be said that merging of files involves the combining of records from two or more ordered

files into a single ordered file. Each of the constituent files must be in the same order. The output file will be in the

same order as the input files, placing records from each in their correct relative order as explained earlier.
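The merge procedure described above can be sketched as a short Python function (the file contents and names are illustrative in-memory lists; real files would be read record by record from disk):

```python
def merge_files(file1, file2, key=lambda r: r[0]):
    """Merge two record lists already sorted on the same key.

    Mirrors the text: read one record from each file, write the one
    with the lower key to the output; when one input reaches EOF,
    copy the leftover records from the other.
    """
    out, i, j = [], 0, 0
    while i < len(file1) and j < len(file2):
        if key(file1[i]) <= key(file2[j]):
            out.append(file1[i]); i += 1
        else:
            out.append(file2[j]); j += 1
    out.extend(file1[i:])   # leftover records after EOF on file2
    out.extend(file2[j:])   # leftover records after EOF on file1
    return out

file1 = [(1001, "Ankit"), (1003, "Cimone"), (1004, "Dolly")]
file2 = [(1002, "Ankita"), (1005, "Sahil"), (1006, "Deepa")]
class_7_file = merge_files(file1, file2)
```

Because both inputs are already ordered, each record is examined only once, so the merge needs a single pass over the two files.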

6. DATA ORGANIZATION

6.1 Introduction to Data

Data comprises raw facts and/or figures from which meaningful conclusions can be easily drawn. When the data

is recorded, classified, organized and related or interpreted within a framework, it provides meaningful information.

Information can be defined as ‘data that has been transformed into a meaningful and useful form for specific purposes’. Data is represented by letters of the alphabet or by numerals, while information may be represented in the form of tables, graphs, charts, etc.

Classification of Data

For data management purposes, data is broadly classified into two categories: (i) Structured and (ii) Unstructured

data.

Structured Data

Structured data or structured information is the data stored in fixed fields within a file or a record. This form of

data representation is also known as ‘Tabular Data’, where data sets are organized in the form of a table.

Structured data is managed by techniques that work on query and reporting against programmed data types and

clear relationships. Databases and spreadsheets are examples of structured data.

Unstructured Data

People use and create unstructured data every day, although they may not be aware of it: a word-processed letter or e-mail, and in fact any document or image, such as those captured by a digital camera, are all examples of ‘Unstructured Data’. Unstructured data primarily consists of ‘Textual Data’ and ‘Image Data’. Textual data is any string of text, which could be a whole book or simply a short note. Images are digital pictures, such as photographs and maps.

6.2 Organization of Data

Data can be organized in a variety of formats. Of these, the hierarchical organization of data is the most widely accepted. In order to understand the organization of data, you need to be acquainted with the following terms:


Data Item

A data item is the smallest unit of information, which is stored in the related field in a file. For example, E-101, ‘Morris’ and 5000.00 are data items stored under the fields Employee ID, Employee Name and Employee Salary of an employee data file. These items are created using characters, numerals and special characters.

Data Field

Data items are stored in data fields for easy retrieval. Fields may be of a fixed length or variable lengths. Fields

can be defined as numeric type fields for storing numeric values, such as 5000.00 (Employee Salary); alphabetic

type fields for storing literals, such as ‘Morris’ (Employee Name); and alphanumeric type fields for storing

alphanumeric values, such as E-101 (Employee ID).

Records

Related fields are grouped together to form a record. So, a record is a collection of fields. Each record corresponds to a specific unit of information.

Files and Databases

Related records are grouped together to form a file. So, a file is a collection of records. Files are usually stored on

storage media, such as floppy disks, magnetic tapes or hard disks.

Related files are grouped together to form a database. So, the collection of related files is called database.

A database provides a convenient environment to store and retrieve information. Traditional databases follow a

hierarchical model. Other popular database models are network model, relational model and Object Oriented

(OO) model.
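The hierarchy from data item up to database can be illustrated with plain Python structures (purely a sketch; the field names follow the employee example above and are not a real schema):

```python
# A record: related fields grouped together. Each value is a data
# item stored in its field.
record = {
    "employee_id": "E-101",      # alphanumeric field
    "employee_name": "Morris",   # alphabetic field
    "employee_salary": 5000.00,  # numeric field
}

# A file: a collection of related records.
employee_file = [record]

# A database: a collection of related files.
database = {"employees": employee_file}
```

Reading the structure inside out recovers the hierarchy: data item, field, record, file, database.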

6.3 Sequential Access

Often, it is required to process the records of a file in the sorted order based on the value of one of its fields. If the

records of the file are not physically placed in the required order, it takes time to fulfil this request. However, if the

records of that file are placed in the sorted order based on that field, we would be able to efficiently fulfil this

request. A file organization in which records are sorted based on the value of one of its fields is called sequential

file organization and such a file is called a sequential file. In a sequential file, the field on which the records are

sorted is called the ordered field. This field may or may not be the key field. In case the file is ordered on the

basis of the key, then the field is called the ordering key.

Note: A sequential file does not make any improvement in processing the records in random order.

An operation such as searching for records is more efficient in a sequential file if the search condition is specified on the ordering field, because binary search is applicable instead of linear search. Moreover, retrieval of records

with the range condition specified on the ordering field is also very efficient. In this operation, all the records,

starting with the first record satisfying the range selection condition till the first record that does not satisfy the

condition, are retrieved. However, handling the deletion and insertion operations is complicated. Deletion of a

record leaves a blank space between the two records. This situation can be handled by using a deletion marker.

When a new record is to be inserted in a sequential file, there are two possibilities. First, the record needs to be

inserted at its actual position in the file. Obviously, it requires locating the first record that has to come after the

new record and making space for the new record. Making space for a record may require shifting a large number

of records and this is very costly in terms of disk access. Secondly, we can insert that record in an overflow area

allocated to the file instead of its correct position in the original file. Note that the records in the overflow area are

unordered. Periodically, the records in the overflow area are sorted and merged with the records in the original

file.


The second approach of insertion makes the insertion process efficient; however, it may affect the search

operation. This is because the required record needs to be searched in the overflow area using linear search if it

is not found in the original file using binary search.
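The two-stage search described above can be sketched as follows (a simplification that represents the sorted main file and the unordered overflow area as in-memory lists of records keyed on the ordering field):

```python
import bisect

def search_sequential(main_file, overflow, key):
    """Search a sequential file sorted on its ordering field.

    Binary search applies to the sorted main file; the unordered
    overflow area can only be scanned linearly, as the text notes.
    """
    keys = [r[0] for r in main_file]
    i = bisect.bisect_left(keys, key)          # binary search
    if i < len(main_file) and main_file[i][0] == key:
        return main_file[i]
    for r in overflow:                          # linear search
        if r[0] == key:
            return r
    return None

main_file = [(1001, "Ankit"), (1003, "Cimone"), (1005, "Sahil")]
overflow = [(1004, "Dolly")]   # inserted later, not yet merged in
```

Periodically merging the overflow area back into the main file restores pure binary-search behaviour for every record.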

6.4 Random Access

Unlike a sequential file, records in this file organization are not stored sequentially. Instead, each record is mapped

to an address on the disk on the basis of its key value. One such technique for the mapping of a record to an

address is called hashing. Hashing consists of two parts, i.e., a hash function and a collision resolution

technique.

When a record is to be inserted in a random file, a hash function is applied on the key value of the record

that gives the page address where the record is to be placed. If a record is mapped to a page which is already full,

then another page address is computed for the record using the collision resolution technique.

Since each record is placed at the page indicated by the hash function, searching for a record is simple.

The same hash function is applied on the key value of the record to be searched which gives the address of the

page where the desired record may be found. Then, all the records of that page are examined to locate the

desired record. If the desired record is not found in that page, the address of another page is computed according

to the method employed in the collision resolution technique.
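A minimal sketch of this scheme, assuming a small file of fixed-capacity pages, a modulo hash function and linear probing as the collision resolution technique (one of several possible choices; the sizes are illustrative):

```python
N_PAGES = 8          # illustrative number of pages in the file
PAGE_CAPACITY = 2    # illustrative records per page

pages = [[] for _ in range(N_PAGES)]

def insert(key, record):
    """Place a record at the page given by the hash function; on a
    full page, probe the next page (linear probing)."""
    p = key % N_PAGES                  # hash function
    for _ in range(N_PAGES):
        if len(pages[p]) < PAGE_CAPACITY:
            pages[p].append((key, record))
            return p
        p = (p + 1) % N_PAGES          # collision: try next page
    raise OverflowError("file full")

def search(key):
    """Apply the same hash function, examine that page, and follow
    the same probe sequence if the record is not there."""
    p = key % N_PAGES
    for _ in range(N_PAGES):
        for k, rec in pages[p]:
            if k == key:
                return rec
        if len(pages[p]) < PAGE_CAPACITY:
            return None    # the record would have been placed here
        p = (p + 1) % N_PAGES
    return None
```

Because insertion and search follow the identical probe sequence, any record that was placed successfully can always be found again.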

6.5 Indexed Sequential Access

In an indexed sequential organization, the records are stored in physical sequence according to the primary key.

The file management system builds an index separate from the data records and contains key values together

with pointers to the data records themselves. This index permits individual records to be accessed at random

without accessing other records. The entire file can also be accessed sequentially.

This type of file consists of three main parts, namely the file index, the prime or home area and the

overflow area. The prime area is where the data records are loaded in sequential order when the file is first

created. The overflow area is where additions to the file that cannot be accommodated in the prime area are

stored. The index area holds the set of ‘pointers’ that enable individual records to be located. The index contains the relevant record keys and corresponding record addresses. Access and retrieval of a specific record is effected

through the use of the index. The overflow area is linked to the rest of the file through a system of pointers

maintained in the index. There are two basic implementations of the indexed sequential organization.

• Hardware dependent—the access method that supports this organization is called Indexed Sequential

Access Method (ISAM).

• Hardware independent—the access method that supports hardware independent organization is the Virtual Storage Access Method (VSAM).
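A simplified sketch of an indexed sequential file, representing the prime area as a list of records in key sequence and the index as parallel lists of keys and addresses (list positions stand in for real disk addresses; the data is illustrative):

```python
import bisect

# Prime area: data records loaded in key sequence.
prime_area = [(1001, "Ankit"), (1002, "Ankita"), (1003, "Cimone")]

# Index built separately: record keys with pointers to the records.
index_keys = [k for k, _ in prime_area]
index_addrs = list(range(len(prime_area)))

def fetch(key):
    """Random access via the index: look the key up, then follow
    its pointer straight to the record."""
    i = bisect.bisect_left(index_keys, key)
    if i < len(index_keys) and index_keys[i] == key:
        return prime_area[index_addrs[i]]
    return None

def scan():
    """Sequential access: read the prime area in physical order."""
    return list(prime_area)
```

The same file thus supports both access patterns: `fetch` reaches one record without touching the others, while `scan` reads the whole file in key order.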

7. PROGRAMMING PROCESS

7.1 Introduction to Programming

A program is a set of instructions that a computer can interpret and execute; sometimes the instruction it has to

perform depends on what happened when it performed a previous instruction. The programming concept is purely based on the Input Processing Output (IPO) chart, as shown in the figure.

Input Processing Output Chart


The figure shows three input values assigned to three variables: Number1, Number2 and Number3. In the processing part, the three numbers are read and added together, and the total value is printed. The Total variable holds the result. This problem can be solved by the following algorithm:

Add-Three-Numbers

Read Number1, Number2, Number3

Total = Number1 + Number2 + Number3

Print Total

END
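The algorithm above translates directly into Python (a sketch; the function name is ours):

```python
def add_three_numbers(number1, number2, number3):
    """Read three numbers, add them together and return the total,
    following the Add-Three-Numbers algorithm step by step."""
    total = number1 + number2 + number3
    return total

print(add_three_numbers(10, 20, 30))   # prints 60
```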

Interpreters: With an interpreter, the language comes as an environment, where you type in commands

at a prompt and the environment executes them for you. For more complicated programs, you can type the

commands into a file and get the interpreter to load the file and execute the commands in it. If anything goes

wrong, many interpreters will drop you into a debugger to help you track down the problem.

Compilers: First of all, you write your code in a file (or files) using an editor. You then run the compiler

and see if it accepts your program. If it did not compile, go back to the editor; if it did compile and gave you a

program, you can run it either at a shell command prompt or in a debugger to see if it works properly. In fact,

distributing a program written for a compiler is usually more straightforward than one written for an interpreter—

you can just give them a copy of the executable, assuming they have the same operating system as you. Compiled

languages include Pascal, C and C++. C and C++ languages are best suited to more experienced programmers.

Pascal, on the other hand, was designed as an educational language and is quite a good language to start with.

Characteristics of a Good Program

The characteristics of a good program are as follows:

Minimal Complexity: The main goal in any program should be to minimize complexity. As a developer, you will spend most of your time maintaining or upgrading existing code.

Ease of Maintenance: This is making your code easy to update. Find where your code is most likely

going to change and make it easy to update.

Loose Coupling: It takes place when one portion of code is not dependent on another to run properly. It

is bundling code that does not rely on any outside code.

Extensibility and Reusability: This means that you design your program so that you can add or remove

elements from your program without disturbing the underlying structure of the program.

Portability: Design a system that can be moved to another environment to work on different platforms.

Leanness: Leanness means making the design with no extra parts. Everything that is within the design

has to be in the programming part.

Stages of Program Development

The stages involved in program development are as follows:

Domain Analysis: Often the first step in attempting to design a new piece of software, whether it be an addition to existing software, a new application, a new subsystem or a whole new system, is what can be generally referred to as domain analysis.

Software Elements Analysis: The most important task in creating a software product is extracting the requirements. Customers typically have an abstract idea of what they want as an end result, but not of what software should do. Incomplete, ambiguous or even contradictory requirements are recognized by skilled and experienced software engineers at this point.

Requirements Analysis: Once the general requirements are gleaned from the client, an analysis of the scope of the development should be determined and clearly stated. This is often called a scope document.


Certain functionality may be out of the scope of the project as a function of cost or as a result of unclear requirements at the start of development.

Specification: Specification is the task of precisely describing the software to be written, possibly in a rigorous way. In practice, most successful specifications are written to understand and fine-tune applications that were already well developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable.

Software Architecture: The architecture of a software system refers to an abstract representation of that system. Architecture is concerned with making sure the software system will meet the requirements of the product, as well as ensuring that future requirements can be addressed.

Implementation: This is the part of the process where software engineers actually program the code for the project.

Testing: Testing software is an integral and important part of the software development process. This part of the process ensures that bugs are recognized as early as possible.

Deployment: After the code is appropriately tested, it is approved for release and sold or otherwise distributed into a production environment.

Documentation: Documenting the internal design of software for the purpose of future maintenance and enhancement is done throughout development.

Software Training and Support: A large percentage of software projects fail because the developers fail to realize that it does not matter how much time and planning a development team puts into creating software if nobody in the organization ends up using it.

Maintenance: Maintaining and enhancing software to cope with newly discovered problems or new requirements can take far more time than the initial development of the software. It may be necessary to add code that does not fit the original design to correct an unforeseen problem, or it may be that a customer requests more functionality and code is added to accommodate their requests.

7.2 Algorithms

According to Niklaus Wirth, a computer scientist, Programs consist of algorithms and data, i.e.,

Programs = Algorithms + Data

An algorithm is an important component of the blueprint or plan for a computer program. In other words, an algorithm may be defined as ‘an effective procedure to solve a problem in a finite number of steps’. This means that an answer is found, and that finding it involves a finite number of steps. A well-framed algorithm always provides an answer. It is not necessarily the answer you want, but there must be some answer; maybe you get the answer that there is no answer. A well-designed algorithm is also guaranteed to terminate.

The term ‘algorithm’ is derived from the name of Al-Khwarizmi, a Persian mathematician. An algorithm may be defined as a finite set of well-defined instructions for accomplishing some task which, given an initial state, will terminate in a corresponding recognizable end-state.

Algorithms can be implemented by computer programs, although often in restricted forms; an error in the design of an algorithm for solving a problem can lead to failures in the implemented program.

Thus, an algorithm is a step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps.

Characteristics of an Algorithm

As already mentioned, an algorithm is a finite set of instructions that accomplishes a particular task. An algorithm

must satisfy the following criteria:

• Input: Zero or more items to be given as input.

• Output: At least one item is produced as output.


• Definiteness: The instructions, which are used in algorithm, should be clear and unambiguous.

• Finiteness: The algorithm should terminate after a finite number of steps.

• Effectiveness: Each and every instruction should be simple and very basic.
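These criteria can be illustrated with Euclid's greatest-common-divisor algorithm, a standard example (not taken from this text):

```python
def gcd(a, b):
    """Euclid's algorithm satisfies all five criteria: two inputs,
    one output, unambiguous steps (definiteness), only simple basic
    operations (effectiveness), and guaranteed termination because
    the remainder strictly decreases each iteration (finiteness)."""
    while b != 0:
        a, b = b, a % b
    return a
```

For example, gcd(48, 18) reduces 48, 18 to 18, 12 to 12, 6 to 6, 0 and terminates with the answer 6.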

7.3 Flow Charts

A flow chart is a symbolic representation of an algorithm or process, representing the data in boxes connected with arrows in the direction of the flow of data. Flow charts are widely used by system analysts when designing a system for successful implementation.

The features of system flow charts are as follows:

• The sources from which data is generated and the devices used for this purpose.

• The various processing steps involved.

• The intermediate and final outputs prepared and the devices used for their storage.

A flow chart provides an easy and effective approach for preparing a document and analysing the flow of data and control in a system. It helps verify and validate the developed program for appropriate debugging. The table below lists the symbols that you can use for creating a system flow chart.

Table Flow Chart Symbols and their Use

Symbol Name Description

Start or End Specifies the start and end points of a flow chart

Arrow Connects different flow chart symbols

Process Specifies an operation or computation to be performed

Input or Output Specifies the input or the output of the process

Decision Specifies whether or not the specified condition is true

Off-page Connector Helps connect the flow chart drawn on different pages

Stored Data Specifies the name of the file stored on the disk

Connector Connects two parts of a program

7.4 Coding

A program always has two sets of code: source code and object code. Source code is what is created first, the language that the programmer uses to give instructions to the computer's compiler in order to make the program run. The result of the compiler compiling these source code instructions is called object code.

The terms are intuitive, in that source code is the means to the end that is object code. In other words, the source

code is the beginning, or source, of the operation and the object code is the desired result, or object, of the whole

exercise. Object code is stored in files that are created by the computer’s compiler and can then become the

ultimate end intended by the programmer. Once source code has been compiled into files, it can then continue on

to the computer’s processor, which executes the final instructions. Commonly available software applications are


huge collections of object code, which cannot be altered fundamentally since the source code is not included. It is

like having the solution to the problem but not all the steps used to solve that problem.
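As a rough analogy in Python (a sketch, not the native compilation the text describes): the built-in compile() turns source text into a code object, Python's counterpart of object code (bytecode rather than native machine instructions), which the interpreter's virtual machine then executes.

```python
# Source code as text: the means to the end.
source = "total = 2 + 3"

# compile() produces a code object -- a rough analogue of object
# code, though it is Python bytecode rather than machine code.
code_obj = compile(source, "<example>", "exec")

# The Python virtual machine executes the compiled form.
namespace = {}
exec(code_obj, namespace)
print(namespace["total"])   # prints 5
```

Shipping only `code_obj` (as `.pyc` files do) parallels distributing object code without the source: the program runs, but the original steps are not recoverable in their original form.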

7.5 Debugging and Testing

In computers, debugging is the process of locating and fixing or bypassing bugs (errors) in computer program

code or the engineering of a hardware device. To debug a program or hardware device is to start with a problem,

isolate the source of the problem and then fix it. A user of a program who does not know how to fix the problem may learn enough about the problem to be able to avoid it until it is permanently fixed. Debugging a program means identifying and fixing its bugs. Debugging is a necessary process in almost any new software or hardware

development process, whether a commercial product or an enterprise or personal application program. Debugging

tools known as debuggers help to identify the coding errors at various development stages. Some programming

language packages include a facility for checking the code for errors as it is being written. While there are many

types of debugging tools, a simple example is a tool that allows the programmer to monitor program code while

manipulating it to execute various commands and routines.

Testing is the precursor to debugging. It is commonly the forte of programmers and advanced users, and

occurs when a product is new or is being updated and needs to be put through its paces to eliminate potential

problems. Testing identifies bugs or imperfections so that they can be corrected in the debugging process.
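The relationship can be sketched with a deliberately simple test in Python (the function and its defect are invented for illustration): testing exposes the bug, and debugging is the step that then fixes it.

```python
def average(values):
    """Function under test (invented for illustration)."""
    return sum(values) / len(values)

def test_average():
    # Testing puts the code through its paces to surface bugs.
    assert average([2, 4, 6]) == 4
    # The empty-list case exposes a defect (ZeroDivisionError);
    # correcting it is the debugging work that follows testing.
    try:
        average([])
        raised = False
    except ZeroDivisionError:
        raised = True
    assert raised

test_average()
```

A debugger would then be used to step through `average` and decide the fix, for example returning 0 or raising a clearer error for empty input.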

7.6 Structural Programming

Structured analysis is a top-down approach, which focuses on refining the problem with the help of functions

performed in the problem domain and data produced by these functions. This approach facilitates the software

engineer to determine the information received during analysis and to organize the information in order to avoid

the complexity of the problem. The purpose of structured analysis is to provide a graphical representation to

develop new software or enhance the existing software. Generally, structured analysis is represented using a

data-flow diagram.

Data-Flow Diagram (DFD)

IEEE defines a data-flow diagram (also known as bubble chart and work flow diagram) as, ‘a diagram that

depicts data sources, data sinks, data storage and processes performed on data as nodes and logical flow

of data as links between the nodes.’ DFD allows the software development team to depict flow of data from

one process to another. In addition, DFD accomplishes the following objectives.

• It represents system data in a hierarchical manner and with required levels of detail.

• It depicts processes according to defined user requirements and software scope.

A DFD depicts the flow of data within a system and considers a system as a transformation function that

transforms the given inputs into desired outputs. When there is complexity in a system, data needs to be transformed

using various steps to produce an output. These steps are required to refine the information. The objective of

DFD is to provide an overview of the transformations that occur in the input data within the system in order to

produce an output.

A DFD should not be confused with a flowchart. A DFD represents the flow of data whereas a flowchart

depicts the flow of control. Also, a DFD does not depict the information about the procedure to be used for

accomplishing the task. Hence, while making a DFD, procedural details about the processes should not be

shown. DFD helps the software designer to describe the transformations taking place in the path of data from

input to output.

A DFD consists of four basic notations (symbols), which help to depict information in a system. These

notations are listed in Table.


DFD Notations

Name Description

External entity Represents the source or destination of data within the system. Each external entity is identified with a meaningful and unique name.

Data flow Represents the movement of data from its source to destination within the system.

Data store Indicates the place for storing information within the system.

Process Shows a transformation or manipulation of data within the system. A process is also known as a bubble.

While creating a DFD, certain guidelines are followed to depict the data-flow of system requirements

effectively. These guidelines help to create DFD in an understandable manner. The commonly followed guidelines

for creating DFD are listed below.

• DFD notations should be given meaningful names. For example, verbs should be used for naming a process

whereas nouns should be used for naming external entity, data store, and data-flow.

• Abbreviations should be avoided in DFD notations.

• Each process should be numbered uniquely but the numbering should be consistent.

• A DFD should be created in an organized manner so that it is easily understood.

• Unnecessary notations should be avoided in DFD in order to avoid complexity.

• A DFD should be logically consistent. For this, processes without any input or output and any input without

output should be avoided.

• There should be no loops in a DFD.

• A DFD should be refined until each process performs a simple function so that it can be easily represented

as a program component.

• A DFD should be organized in a series of levels so that each level provides more detail than the previous

level.

• The name of a process should be carried to the next level of DFD.

• The data store should be depicted at the context level where it first describes an interface between two or

more processes. Then, the data store should be depicted again in the next level of DFD that describes the

related processes.

8. DATA COMMUNICATION

8.1 Classification of Networks: LAN, MAN, WAN

Computers are connected by many different technologies. An interconnection of more than one computer, over a virtual and shared connection, in a client-to-server or peer-to-peer manner, is called a network. That is to say, computer resources are connected using networks so that the flow of information is accommodated. This is just the opposite of the old terminal-to-host hardwired connection. Although a network can support terminal-to-host connections through terminal emulators or a terminal server, it offers a lot more flexibility in switching connections. The disadvantage of this explosion in information sharing arises when one computer wishes


to share its information system with another which has different network protocols and different network technology.

As a result, even if you could agree on a type of network technology to physically interconnect the two computers

at different locations, your applications still would not be able to communicate with each other because of the

different protocols.

A very basic question arises about the requirement of networks. This may be justified with the help of the

following points:

• Sharing of resources can be easily done.

• Reliability—There is no central computer, so if one breaks down you can use others.

• Networks allow you to be mobile.

The term networking applies to:

• The exchange of information among institutions, groups or individuals.

• The process of data communications or electronic voice.

Communication networks are broadly categorized into three categories as follows:

Local Area Network

The Local Area Network (LAN) technology connects machines and people within a site. LAN is a network that

is restricted to a relatively small area as shown in Figure. LANs can be defined as privately-owned networks

offering reliable high speed communication channels that are optimized to connect information processing equipment

in a small and restricted geographical area, namely, an office, a building, a complex of buildings, a school or a

campus.

A LAN is a form of local (limited-distance), shared packet network for computer communications. LANs

interconnect peripherals and computers over a common medium so that users are able to share access to peripherals,

files, databases, applications and host computers. They can also provide a connection to other networks either

through a computer, which is attached to both networks, or through a dedicated device called a gateway.

Local Area Network (LAN)

The components used by LANs can be categorized into hardware, cabling, protocols and standards. Various LAN protocols are Ethernet, Token Ring, Asynchronous Transfer Mode (ATM), NetBIOS and NetBEUI, TCP/IP, Fibre Distributed Data Interface (FDDI), SMB and IPX/SPX.

Metropolitan Area Network

Such large geographic areas as districts, towns and cities are covered by a Metropolitan Area Network (MAN).

By linking or interconnecting smaller networks within a large geographic area, information is conveniently distributed

throughout the network. Local libraries and government agencies often use a MAN to establish a link with private

industries and citizens. A MAN may also interconnect LANs within an area larger than a single LAN covers. The geographical

limit of a MAN may span a city. Figure depicts how a MAN may be available within a city.


Metropolitan Area Network (MAN)

In a MAN, different LANs are connected through a local telephone exchange. Some of the widely used protocols for MANs are ATM, RS-232, X.25, Asymmetrical Digital Subscriber Line (ADSL), Frame Relay, Integrated Services Digital Network (ISDN) and OC-3 lines (155 Mbps). These protocols are quite different from those used for LANs.

Wide Area Network

This technology connects sites that are in diverse locations. Wide Area Networks (WANs) can span geographic
areas as large as a city such as New Delhi, a country such as India, or the entire world; there is no geographical
limit to a WAN. Such a network can be connected by using satellite uplinks or dedicated transoceanic cables. Hence,
a WAN may be defined as a data communications network covering a relatively broad geographical area, connecting
LANs in different cities with the help of transmission facilities provided by common carriers, such as telephone
companies. WAN technologies operate at the lower three layers of the OSI reference model: the physical, data link
and network layers.

The figure shows a WAN connecting many LANs together. It also uses switching technology provided
by local exchanges and long distance carriers.

Wide Area Network (WAN)

Packet switching technologies, such as Frame Relay, SMDS, ATM and X.25, are used to implement WANs,
along with statistical multiplexing that allows devices to use and share these circuits.

8.2 Satellite Communication

Satellite radio, quite simply, is a non-terrestrial microwave transmission system utilizing a space relay station.

Satellites have proved invaluable in extending the reach of video communications, data and voice, around the

globe and into the most remote regions of the world. Exotic applications such as the Global Positioning System

(GPS) would have been unthinkable without the benefit of satellites.

Contemporary satellite communications systems involve a satellite relay station that is launched into a
geosynchronous, equatorial orbit; satellites in such an orbit are called geostationary satellites. The orbit is
approximately 36,000 km above the equator, as depicted in the figure. At that altitude, and in an equatorial orbital slot,
the satellite revolves around the Earth at the same angular speed as the Earth's rotation and so
maintains its relative position over the same spot of the Earth's surface. Consequently, transmit and receive Earth
stations can be pointed reliably at the satellite for communication purposes.


Satellites in Geostationary Earth Orbit

The popularity of satellite communication has placed great demands on international regulators: the available
frequencies, as well as the limited number of orbital slots for satellite positioning, must be managed at national,
regional and international levels. Generally speaking, geostationary satellites are positioned approximately 2°
apart in order to minimize interference between adjacent satellites using overlapping frequencies.

Such high-frequency signals are especially susceptible to attenuation in the atmosphere. Therefore, in case

of satellite communication, two different frequencies are used as carrier frequencies to avoid interference between

incoming and outgoing signals. These are:

Uplink Frequency: It is the frequency used to transmit signals from an Earth station to a satellite.
Table 1.1 shows that the higher of the two frequencies is used for the uplink, because the uplink signal can be
made stronger to cope better with atmospheric distortion. The antenna at the transmitting side is
centred in a concave, reflective dish that serves to focus the radio beam, with maximum effect, on the
receiving satellite antenna. The receiving antenna, similarly, is centred in a concave metal dish, which
serves to collect the maximum amount of incoming signal.

Downlink Frequency: It is the frequency used to transmit the signal from the satellite to an Earth station.
The downlink transmission is focussed on a particular footprint, or area of coverage. The
lower frequency, used for the downlink, can better penetrate the Earth's atmosphere and electromagnetic
field, which can act to bend the incoming signal much as light bends when entering a pool of water.
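The 36,000 km altitude quoted above makes the propagation delay easy to estimate. The following sketch (constants rounded, variable names invented for illustration) computes the one-way and round-trip delays for a geostationary link:

```python
# Propagation delay to a geostationary satellite, assuming the
# ~36,000 km altitude given in the text and signals at the speed of light.
ALTITUDE_KM = 36_000            # geostationary altitude above the equator
SPEED_OF_LIGHT_KM_S = 300_000   # approx. 3 x 10^5 km/s

one_way_s = ALTITUDE_KM / SPEED_OF_LIGHT_KM_S   # Earth station -> satellite
round_trip_s = 2 * one_way_s                    # uplink + downlink

print(f"one-way delay: {one_way_s * 1000:.0f} ms")    # 120 ms
print(f"round trip:    {round_trip_s * 1000:.0f} ms") # 240 ms
```

This quarter-second round trip is inherent to geostationary links and is why satellite hops are noticeable in interactive voice calls.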

8.3 The Internet

The Internet, the World Wide Web and the Information Super Highway are terms used by millions of people all
over the world. The widespread impact of the Internet across the globe would not have been possible without the
development of Transmission Control Protocol/Internet Protocol (TCP/IP), a protocol suite developed specifically
for the Internet. The Information Technology revolution could not have been achieved without this boundless chain
of networks, which has become a fundamental part of the lives of millions of people all over the world. All the
aforesaid services provide the necessary backbone for information sharing in organizations and within common
interest groups.

During the late 1960s and 1970s, organizations were inundated with many different LAN and WAN technologies,
such as packet switching technology, collision-detection local area networks, hierarchical enterprise networks
and many others. The major drawback of all these technologies was that they could not communicate with
each other without expensive deployment of communications devices. Consequently, multiple networking models
became available as a result of the research and development efforts of many interest groups. This paved the


way for the development of another aspect of networking known as protocol layering, which allows applications to
communicate with each other at high speed and at comparatively low cost.
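The idea of protocol layering can be sketched as each layer wrapping the data from the layer above in its own header, with the receiver unwrapping in reverse order. The layer names and bracketed headers below are illustrative, not those of any real protocol:

```python
# Toy sketch of protocol layering: the sender encapsulates top-down,
# the receiver decapsulates bottom-up.
def encapsulate(payload: str, layers: list[str]) -> str:
    for layer in layers:              # e.g. transport -> network -> link
        payload = f"[{layer}]{payload}"
    return payload

def decapsulate(frame: str, layers: list[str]) -> str:
    for layer in reversed(layers):    # peel headers in the opposite order
        header = f"[{layer}]"
        assert frame.startswith(header), "malformed frame"
        frame = frame[len(header):]
    return frame

LAYERS = ["transport", "network", "link"]
frame = encapsulate("hello", LAYERS)
print(frame)                          # [link][network][transport]hello
print(decapsulate(frame, LAYERS))     # hello
```

Each layer only inspects its own header, which is what lets dissimilar networks interoperate as long as they agree on the layer boundaries.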

The word Internet is a short form of internetwork, or interconnected network.
The Internet is therefore not a single network, but a collection of networks, and is
known as 'the Network of Networks'. Like a phone system, it connects almost anywhere around the world,
exchanging information and acting as a global link between small regional networks. Internet services offer a
gateway to a myriad of online databases, library catalogues and collections, and software and document archives,
in addition to frequently used store-and-forward services, such as Usenet News and e-mail. The commonality
that allows all of these networks to communicate with each other is TCP/IP. The Internet consists of the following
groups of networks:

• Backbones: These are large networks that exist primarily to interconnect other networks. Some examples

of backbones are NSFNET in the USA, EBONE in Europe and large commercial backbones.

• Regional networks: These connect, for example, universities and colleges. ERNET (Education and

Research Network) is an example in the Indian context.

• Commercial networks: These provide subscribers with access to the backbones; they also include networks
owned by commercial organizations for internal use that have connections to the Internet. Internet
Service Providers mainly come into this category.

• Local networks: These are campus-wide university networks.

The networks connect users to the Internet using special devices called gateways or routers.
These devices provide connection and protocol conversion between dissimilar networks and the Internet. Gateways
and routers are responsible for routing data around the global network until it reaches its ultimate destination, as
shown in the figure. The delivery of data to its final destination takes place on the basis of routing tables maintained
by the routers or gateways. These devices are mentioned at various places in this book, as they are fundamental
to connecting similar or dissimilar networks together.
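The routing-table lookup a router or gateway performs can be sketched as a longest-prefix match: among all table entries that contain the destination address, the most specific one wins. The prefixes and next-hop names below are invented purely for illustration:

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next hop.
ROUTES = {
    "10.0.0.0/8":  "gateway-A",
    "10.1.0.0/16": "gateway-B",
    "0.0.0.0/0":   "default-gateway",   # default route matches everything
}

def next_hop(destination: str) -> str:
    """Return the next hop for the most specific matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [
        (ipaddress.ip_network(prefix), hop)
        for prefix, hop in ROUTES.items()
        if addr in ipaddress.ip_network(prefix)
    ]
    # Longest prefix (largest prefixlen) is the most specific route.
    best_net, best_hop = max(matches, key=lambda m: m[0].prefixlen)
    return best_hop

print(next_hop("10.1.2.3"))   # gateway-B (the /16 beats the /8)
print(next_hop("10.9.9.9"))   # gateway-A
print(next_hop("8.8.8.8"))    # default-gateway
```

Real routers implement this with specialized data structures (tries, TCAMs), but the selection rule is the same.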

Over time, TCP/IP defined several protocol sets for the exchange of routing information. Each set pertains

to a different historic phase in the evolution of architecture of the Internet backbone.

Figure: Local Area Networks Connected to the Internet via Gateways or Routers, showing Ethernet (10 Mbps) and token-ring (4 Mbps/16 Mbps) LANs linked through routers across WAN circuits.


9. OPERATING SYSTEM

9.1 Introduction to Operating System

An Operating System (OS) is the primary control program for managing all other programs in a computer. The

other programs, commonly referred to as ‘application programs’, use the services provided by the OS through a

well-defined Application Program Interface (API). Every computer necessarily requires some type of operating

system that tells the computer how to operate and utilize other programs installed in the computer.

Below is an abstract view showing these components of the computer system.

Figure: Abstract view of a computer system. Users (User-1, User-2, ..., User-N) interact with application programs (Calculator, Games, MS Word), which run on the operating system, which in turn controls the hardware.

9.2 Types of Operating System

Operating systems are classified on the basis of their functions:

(a) Multitasking or Multiprogramming: This type of OS permits multiple programs to be run simultaneously
by the same computer. A user can, for example, play a game while a Word document is being
printed, i.e., the user is simultaneously working with two different applications, Word and the game. Operating
systems supporting multitasking include UNIX and the Windows range.

(b) Multithreading: Multithreading is a form of multitasking that permits multiple parts of a single software program
to run simultaneously; for example, a user can perform a spell check on one Word document and simultaneously
print another Word document. The user is working with two different components, the spell check and printing, of the same application, Word. Operating systems supporting multithreading include UNIX, Windows, etc.
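The spell-check-and-print scenario can be sketched with two threads. The function names and messages below are illustrative stand-ins, not real word-processor internals:

```python
import threading

results = []   # shared list; appends are safe under CPython's GIL

def spell_check(doc: str) -> None:
    # Stand-in for a spell check running in one thread of the program.
    results.append(f"spell-checked {doc}")

def print_document(doc: str) -> None:
    # Stand-in for printing running concurrently in another thread.
    results.append(f"printed {doc}")

t1 = threading.Thread(target=spell_check, args=("report.doc",))
t2 = threading.Thread(target=print_document, args=("letter.doc",))
t1.start(); t2.start()   # both parts of the same program run concurrently
t1.join(); t2.join()     # wait for both threads to finish
print(sorted(results))
```

Both threads share the program's memory, which is precisely what distinguishes multithreading within one application from multitasking across applications.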

(c) Multiprocessing: Multiprocessing involves the use of multiple processors (more than one CPU) to simultaneously execute multiple programs. The inclusion of multiple CPUs in a single computer system improves the performance to a large extent. Multiprocessing involves simultaneous processing by a computer system having multiple CPUs, whereas multitasking involves simultaneous processing by a computer system with a single CPU. Operating systems supporting multiprocessing include UNIX, Windows NT, etc.
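A minimal sketch of multiprocessing, using Python's standard multiprocessing module as the vehicle; the squaring workload is a toy stand-in for real parallel work:

```python
from multiprocessing import Pool

def square(n: int) -> int:
    # Work that can run in a separate process, on a separate CPU,
    # in parallel with the other calls.
    return n * n

def run_demo() -> list[int]:
    with Pool(processes=2) as pool:          # two worker processes
        return pool.map(square, [1, 2, 3, 4])

if __name__ == "__main__":
    print(run_demo())                        # [1, 4, 9, 16]
```

Unlike the threads above, each worker here is a full process with its own memory, so the OS can schedule them on different CPUs.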

(d) Single User: This type of OS does not permit multiple users to use the computer and run programs at the same time. It assumes that at any given time only one user uses the system and runs only one program, i.e., it does not allow two users to concurrently work on the same program, e.g., MS DOS.

(e) Multiuser: This type of OS permits multiple users to use the computer and run programs at the same time, e.g., UNIX, Linux, Windows NT.

(f) Parallel System: A parallel system is the outcome of concerns about the limits on the ultimate speed of computation imposed by the propagation speed of electronic signals. Electronic signals travel no faster than the speed of light, about 3 × 10^10 centimetres per second, and hence there is a limit on the physical size of a CPU chip when it is operated at speeds of 1 GHz (10^9 cycles per second) or above. This led to the development of parallel processing systems with multiple CPUs/computers.
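The limit can be made concrete with a one-line calculation: in a single clock cycle at 1 GHz, a signal travelling at the speed of light covers only about 30 cm, so circuitry much farther apart than that cannot be reached within one cycle:

```python
SIGNAL_SPEED_CM_S = 3e10   # upper bound: speed of light, ~3 x 10^10 cm/s
CLOCK_HZ = 1e9             # 1 GHz = 10^9 cycles per second

distance_per_cycle_cm = SIGNAL_SPEED_CM_S / CLOCK_HZ
print(distance_per_cycle_cm)   # 30.0 cm travelled per clock cycle
```

Since a single fast CPU cannot simply be made physically bigger, the alternative is many CPUs working in parallel.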

(g) Distributed System: A distributed system is composed of a large number of computers connected by high-speed networks. The computers in the distributed system are independent, but they appear to the users as one large single system. In distributed systems, users can work from any of the independent computers as terminals. The applications or programs may run on any of the computers, distributed even across geographically distant places, and possibly


on the computer (intelligent terminal) from which the user entered the command to run the program. A distributed
system has only a single file system, with all files of a user accessible (as dictated by the permissions
associated with each file) from any computer of the system, with the same access permissions and using the
same path name.

(h) Real Time System: A real time operating system is the one which responds to real time events or inputs

within a specified time limit. The importance of real time systems has increased especially in today’s advanced

usage of embedded applications. Real time systems include satellite control systems, process control systems,

control of robots, air-traffic control systems and autonomous land rover control systems. A real time system must
work with a guaranteed response time depending upon the task; otherwise, the application might fail. In a real
time operating system, some of the tasks are real time and the others are ordinary computing tasks. Real time
tasks are classified as hard and soft: a hard real time task must meet its deadline to avoid undesirable damage
to the system and other catastrophic events, while a soft real time task also has a deadline, but failure to meet it
will not cause a catastrophe or great losses.

(a) UNIX

UNIX is an operating system originally developed in 1969 by employees of AT&T. The most significant stage
in the early development of UNIX came in 1973, when it was rewritten in the C programming language (also an
AT&T development). This was significant because C is a high level programming language, meaning it is written
in a form that is closer to human language than machine code. The philosophy among the IT community at the
time dictated that since operating systems dealt primarily with low level and basic computer instructions, they
should be written in low level languages that were hardware specific, such as assembly language. The chief
advantage that developing in C gave UNIX was portability: very few changes were needed for the operating system
to run on other computing platforms. This portability made UNIX widely used in the IT community, which
consisted predominantly of higher education institutions, government agencies and the IT and telecommunication
industries.

Currently, the main use of UNIX systems is for Internet or network servers. Commercial organizations
also use UNIX for workstations and data servers. UNIX has been used as a base for other operating systems;
for example, Mac OS X is based on a UNIX kernel. An operating system that conforms to industry standards or
specifications can be called a UNIX system. Operating systems that are modelled on UNIX but do not by design
conform strictly to these standards are known as UNIX-like systems. Initially, UNIX systems used a Command
Line Interface (CLI) for user interaction, but now many distributions come with a Graphical User Interface
(GUI).

(b) LINUX

Linux is a UNIX-like operating system originally developed by Linus Torvalds, a student at the University of Helsinki.
Since the complete source code for Linux is open and available to everyone, it is referred to as Open Source. The
user has the freedom to copy and change the program or distribute it between friends and colleagues.

Technically, Linux is strictly an OS kernel, the core of an operating system. The first Linux
kernel was released to the public in 1991. It had no networking, ran on limited PC hardware and had little device
driver support. Later versions of Linux come with a collection of software including a GUI, server programs,
networking suites and other utilities that make it a more complete OS. Typically, an organization will integrate
software with the Linux kernel and release what is called a Linux Distribution. Examples of popular Linux
Distributions are Red Hat, Mandrake and SuSE. These organizations are commercial ventures, selling their distributions
and developing software for profit.

Linux is primarily used as an OS for network and Internet servers. Lately it has gained popularity as a

desktop OS for general use since the wider inclusion of GUIs and office suite software in distributions.


(c) Mac OS

Mac OS is the operating system designed for the Apple range of personal computers, the Macintosh. It was first
released in 1984 with the original Macintosh computer and was the first OS to incorporate a GUI. In fact, in
contrast to the other operating systems available at the time, which used a Command Line Interface (CLI), Mac OS
was a pure GUI, as it had no CLI at all. The philosophy behind this approach to operating system design was to
make a system that was user friendly and intuitive, where MS DOS and UNIX appeared complicated and
challenging to use in comparison.

Mac OS was originally very hardware-specific, running only on Apple computers using Motorola 68000
processors. When Apple started building computers using PowerPC processors and hardware, Mac OS was

updated to run on these machines. All the versions of Mac OS were pure GUIs. The release of OS X (or Mac OS

10) was a significant change in the development of Apple operating systems. OS X was built on UNIX technology

and introduced better memory management and multitasking capabilities in the OS. It also introduced a CLI for

the first time.

(d) MS DOS

Microsoft Disk Operating System (MS DOS) is a single-user, single-tasking operating system built by Microsoft. It was
the most commonly used operating system for PCs in the 1980s and Microsoft's first commercialized operating
system. It was essentially the same operating system that Microsoft developed for IBM's personal computer as the
Personal Computer Disk Operating System (PC DOS) and was based on the Intel 8086 family of microprocessors. MS
DOS uses a CLI that requires knowledge of a large number of commands. With GUI based operating systems
becoming popular, MS DOS quickly lost its appeal, though it was the underlying basic operating system on
which early versions of the GUI based Windows operating system ran. Even today you will find that Windows
operating systems continue to use and support MS DOS within a Windows environment. MS DOS was
initially released in 1981, and eight versions of it have been released since. Today, Microsoft has stopped
paying much attention to it and is focusing primarily on the GUI based Windows operating systems.

(e) Windows

I. Windows 3.x

Microsoft released Windows 3.0 in 1990. It was a graphical interface-based package
and not a complete operating system, because it required DOS to be installed first on the computer, and only after
that could it be loaded and used. With the launch of Windows 3.11, huge improvements in usability and
performance were seen, because the user no longer had to remember complex DOS commands, work on a single
application at a time or suffer from the limited use of input devices such as a mouse or trackball.

Some of the prominent features of Windows 3.0 and 3.11 were: a GUI in which programs could be
executed just by double clicking on them, and in which most of the system settings could be modified from one point called
the Control Panel; the ability to perform most of the DOS housekeeping tasks, such as creating, renaming and
deleting directories, copying, moving, renaming and deleting files, and formatting disks; running multiple programs in
different windows; interchanging data between different applications using a utility called the clipboard; support for more
options such as fax, drawings and graphical Internet browsing; and work on mixed text and graphical documents.
Most DOS applications could be executed from within the Windows environment, and the graphical interface
was extended to those applications which were designed for Windows.

II. Windows 95

Windows 95 was a graphical user interface released by Microsoft Corporation in 1995. It had significant improvements over the earlier version of an operating system distributed by Microsoft under the name of Windows 3.11. In addition to the complete change in the user interface, there were a number of important internal modifications


made to the core of the operating system. Windows 95, also known as Windows version 4.0 during its development phase, was one of the most successful operating systems of that time. Windows 95 operated independently of MS DOS rather than in conjunction with it, reducing MS DOS to only a boot loader for Windows 95.

III. Windows 98

Microsoft released the next version of Windows in 1998. Like its predecessor, Windows 98 supported a hybrid
16/32 bit file access system and a better graphical user interface. It is often referred to as an operating system that
'Works Better, Plays Better'. Code named 'Memphis' during its development stage, Windows 98 integrated
Internet Explorer into the user's desktop to give its users a global view of technologies over the World Wide
Web and enable easy access to it.

IV. Windows 98 SE

Windows 98 SE (Second Edition) is an improved and enhanced version of Windows 98. It includes newer versions
of Microsoft applications than Windows 98, improving the user experience and the stability of the operating system.
Some of the new or improved elements of this operating system are: Internet connection sharing, the Windows
Driver Model (WDM) for modems, Wake on LAN, Internet Explorer 5.0, integrated support for DVD-ROM drives,
numerous bug fixes, Microsoft Plus!, support for WebTV and updates for other Microsoft programs such as NetMeeting,
MSN, Microsoft Wallet and Windows Media Player.

V. Windows Millennium Edition

Windows ME (Millennium Edition) was released on 14 September 2000 and targeted especially at home PC users.
This OS was a continuation of Windows 98, with restricted access to the real mode MS DOS shell to improve
functionality. Among other changes, Windows ME incorporated an improved look and feel in the user interface
and a System Restore option for going back to a previous state of the machine. The key features of this operating
system were upgraded versions of Microsoft products such as Internet Explorer 5.5 and Windows Media
Player 7, System Restore, applications to easily connect with digital cameras and scanners, Windows
Movie Maker, improved generic support for the USB interface and a shell extension for ZIP files in Windows Explorer.

VI. Windows NT

Microsoft released this version of Windows in 1993. It increased ease of use and simplified management. It used
the Windows 95 interface and included advanced network support and trouble-free, better access to the Internet
and corporate intranets. Designed as an operating system capable of supporting high level languages while being
processor independent and supporting a multiuser and multiprocessing environment,
Windows NT had high acceptance in both the home user and professional user markets. Various versions of
Windows NT were released over the years, from Windows NT 3.1 in 1993 to Windows NT 4.0 in 1996,
after which product development under the NT name was stopped by Microsoft.

VII. Windows 2000

Microsoft released this version of Windows in 2000. It was an upgrade from Windows NT 4.0, designed with
the aim of replacing Windows 95, Windows 98 and Windows NT on all business desktops and laptops. This
version was easy to use, Internet compatible and supported mobile computing. It made hardware installation
much easier by including support for a range of new Plug and Play devices, including advanced networking and
wireless products, infrared and USB devices. The main features of Microsoft Windows 2000 were: dump
capabilities, wherein the operating system gave its users the option of dumping either a part of the memory or its
entire contents into a file on the hard drive, which helped in saving critical information in case of a system failure;
the Microsoft Management Console to control access to administrative tools and system settings; the Recovery
Console; and support for the distributed file system.


VIII. Windows 2003

Windows 2003 was released by Microsoft on 24 April 2003. This OS was designed and developed from various
functional parts of Windows 2000 and Windows XP. It boasted better stability, compatibility and security than
Windows 2000 and XP. It improved system performance by taking advantage of recent hardware
developments, redesigning the system interface and developing better services. Major updates of Microsoft
in-house applications and services, such as networking, the web server and compatibility with Windows NT, were
released with this OS.

The main features of this OS are: support for 64-bit processors; Internet Information Services 6.0;
a separate Web Edition of Windows 2003 specially designed as a web server; tighter security than
previous versions of Windows using a built-in firewall; support for a hardware based monitoring system called
a 'watchdog timer', which could monitor the server for hang-ups and freezes; virtual disk services for offsite
storage; and support for multiple roles such as that of a web server, print server and storage server.

IX. Windows XP

Windows XP was first released on 25 October 2001 and since then over 600 million copies have been sold

worldwide. It is a successor to both Windows 2000 and Windows ME and the first OS aimed at home users that
was built on the Windows NT kernel and architecture. Due to the integration of multiple technologies from various
operating systems, it gained wide popularity among home and business desktop, notebook and media centre users.
As acknowledged by most Windows XP users as well as by Microsoft Corporation, this version of Windows is the
most stable and efficient OS Microsoft has released yet.

X. Windows Vista

The most recent in the line of Microsoft Windows personal computer operating systems, Windows Vista, codenamed
'Longhorn', was developed to succeed Windows XP and to improve upon its security. Microsoft started the
development of Windows Vista five months after releasing Windows XP, and the work continued till November
2006, when Microsoft announced its completion, ending the longest development cycle of any of its operating
systems. The original idea of building Longhorn from Windows XP's code was scrapped, and it was instead built
on Windows 2003 SP1. Developments included an all new graphical interface named Windows Aero; refined
and faster search capabilities; an array of new tools such as Windows DVD Maker; integrated Windows Media
Centre in the Vista Home Premium and Vista Ultimate Editions; reworked print, audio and display subsystems;
and redesigned networking.

XI. Windows CE

Windows Embedded Compact (CE) is an operating system optimized for devices with minimal hardware
resources, such as embedded devices and handhelds. It integrates advanced and reliable real time capabilities with
Windows technology. The kernel of this OS is not just a trimmed down version of the desktop Windows kernel; it
is a brand new kernel which can run in less than a megabyte of memory. Besides performing on
minimal hardware, it also satisfies the prerequisites of a real time operating system.
Another distinct feature of Windows CE is that it was made available in source code form to several hardware
manufacturers, so that they could modify the OS to fit their hardware, and also to the general public. Since
Windows CE was developed as a component based, embedded operating system, it has been used as a basis
in the development of several mobile operating systems, such as AutoPC, PocketPC, Windows Mobile and
Smartphone, and has also been embedded into games consoles such as the Microsoft Xbox.

9.3 OS Installation

How to Install an Operating System on a Computer

1. Back up all user files if you are planning to perform a system recovery. Buy a large stack of CDs or

multiple flash drives. This is the best way to back up your hard drive, especially if you have got a large


quantity of files which take up a lot of space on the drive. The flash drives are particularly useful if you are

unable to burn files to disk. You can use the flash drives to transfer files to another computer, and then burn

them to CD. You also might want to try zip drives. Other alternatives include connecting the computer to

another computer using a null modem cable or a networking USB cable, then downloading all user files to

the second computer. Also, an external hard drive works very well as a backup drive, as does a tape drive.

This step is primarily for user files which you and other members of your household created. Save important

information such as program activation codes, your internet access information and usernames and

passwords.

2. Gather together all the installation CDs which came with your computer and with the devices you have

added to your computer. This may include your printer, modem, router, access point, disk drives, graphics

card, sound card and any other devices which have their own separate installation software. Create a boot

disk for your chosen operating system to guard against system failure.

3. Ensure that your computer is set up to boot from the CD ROM drive. This is done in the system BIOS.

Most computers access the BIOS by selecting a function key on boot up. Watch the splash screen as your

computer boots and select the function key which your screen instructs to get into setup. In the boot menu

of the BIOS setup, select the CD ROM drive as the first choice for booting the computer.

4. Place the first CD for the operating system in the CD drive. This will either be the first CD of a set of

recovery CDs or a standalone version of the operating system on CD.

5. Turn the computer off. Wait a full 30 seconds before turning it back on again. Turn the computer back on,

and follow the instructions on the screen to partition and format your hard drive. This will erase all data

which previously existed on the drive.

6. Follow the instructions which appear on the screen.

7. Reconnect to the internet. Troubleshoot any problems with your installation by visiting the software provider’s

support website. Look in particular for online knowledge bases. These contain valuable information which

can help you to overcome even the most difficult installation problems. Use several different search terms

to search for the solutions you need.

8. Install or reinstall your antivirus software and promptly download and install all the definition updates. If

possible do this without opening a web browser.

9. Navigate immediately to the update site of the operating system’s manufacturer and download all critical

updates. Set up automatic updates if possible. Do not forget to upgrade the browser to the latest version,

as old browsers often have security vulnerabilities which are susceptible to viruses and hackers. Avoid

beta versions of web browsers.

10. Install the software for any additional devices which did not come with your computer. Update the drivers

for all of your devices. Check device manager to see which version of each driver you are using and check

the date.
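Step 1's backup of a user directory can be sketched with standard library calls. The directory names are illustrative, and a temporary directory stands in for a real drive:

```python
import shutil
import tempfile
from pathlib import Path

def back_up(user_dir: Path, backup_dir: Path) -> int:
    """Copy a user directory tree to a backup location; return files copied."""
    shutil.copytree(user_dir, backup_dir, dirs_exist_ok=True)
    return sum(1 for p in backup_dir.rglob("*") if p.is_file())

# Demonstration in a temporary directory instead of a real drive.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "My Documents"
    src.mkdir()
    (src / "notes.txt").write_text("activation codes, usernames, passwords")
    copied = back_up(src, Path(tmp) / "backup")
    print(copied)   # 1 file copied
```

In practice the backup destination would be an external drive, flash drive or second computer, as the step describes.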

To uninstall the operating system, start the PC in safe mode by pressing F8 as the PC starts, and
follow the instructions. To close the operating system, shut down the PC.

9.4 Microsoft Disk Operating System

(a) Introduction of MS DOS

Microsoft Disk Operating System (MS DOS) is a single user, single tasking operating system. DOS has a

command line, text-based/non-graphical user interface commonly referred to as Character-based User Interface

(CUI). When the computer is switched on, a small program checks all internal devices, electronic memory and

peripherals. Once this process is completed, MS DOS is loaded.


(b) DOS Prompt

The DOS prompt, also known as the command prompt, looks like C:\> or D:\>, where 'C' and 'D' represent the hard
drives of the computer system. All commands are typed at the DOS prompt, and the Enter key is pressed to execute
the typed command. If the command is typed correctly, the desired output is displayed; otherwise an error
message (Bad command or filename/Invalid parameter) is displayed on the screen.

(c) Internal DOS Commands

DATE
Syntax: DATE
Explanation: Displays the system’s current date and prompts the user to enter a new date.
Example: C:\>DATE
The current date is: Fri 05/09/2003
Enter the new date: <mm-dd-yy>:

TIME
Syntax: TIME
Explanation: Displays the current time and prompts the user to enter a new time.
Example: C:\>TIME
The current time is: 12:55:25.34
Enter the new time:

VER
Syntax: VER
Explanation: Displays the Windows version.
Example: C:\>VER displays the Windows version installed on your computer.

PROMPT
Syntax: PROMPT [Text]
Explanation: Changes the MS DOS command prompt to the specified text. If the command is typed without any parameters, the default prompt setting is restored.
Example: D:\>PROMPT restores the prompt to the default setting.

COPY
Syntax: COPY <Source> <Destination>
Explanation: Creates a copy of the specified file and places it in the specified location; the file exists at the destination as well as the source location.
Example: C:\DATA>COPY HELLO.TXT \LETTER creates a copy of HELLO.TXT in the LETTER folder of the C: drive.

REN
Syntax: REN <Path> <Oldfile> <Newfile>
Explanation: Renames the old file with the specified new file name.
Example: C:\DATA>REN HELLO.TXT Hi.TXT renames ‘HELLO.TXT’ as ‘Hi.TXT’.

DEL/ERASE
Syntax: DEL <Path><Filename>
Explanation: Deletes the specified file, present in the specified path/location, from the hard disk.
Example: C:\DATA>DEL Hi.TXT deletes the file ‘Hi.TXT’ located in the ‘DATA’ folder of the C: drive.

TYPE
Syntax: TYPE <Filename>
Explanation: Displays the contents of a text file.
Example: C:\DATA>TYPE TMP.TXT displays the contents of TMP.TXT.

DIR
Syntax: DIR <Drive/Directory-Name>
Explanation: Displays all the sub-directories and files of the specified drive/directory. It also shows the size of the files and the date and time they were last modified.
Example: C:\>DIR D: displays all the contents (files and directories) of the D: drive.

DIR/P
Syntax: DIR <Drive/Directory-Name>/P
Explanation: Displays the contents of the directory one screen at a time and pauses until a key is pressed to continue the display.
Example: C:\>DIR DATA/P displays the contents of the ‘DATA’ directory, pausing after each screen.

DIR/W
Syntax: DIR <Drive/Directory>/W
Explanation: Displays the contents of the directory width-wise. It omits the file size and the date and time of creation so that more files can be displayed at one time on the screen.
Example: C:\>DIR DATA/W displays the contents of the ‘DATA’ directory width-wise.

DIR/W/P
Syntax: DIR <Drive/Directory>/W/P
Explanation: The wide and pause display options can be combined.
Example: C:\>DIR DATA/W/P displays the contents of the ‘DATA’ directory width-wise, pausing after each screen.

CD
Syntax: CD <Directory-Name> (CD\ takes you directly to the root directory)
Explanation: Displays the name of the current directory if no parameter is specified with the command. Otherwise, changes the current directory to the specified directory.
Example: C:\>CD DATA\SUBDATA changes the current directory to ‘DATA\SUBDATA’.

MD
Syntax: MD <Drive/Directory-Name>
Explanation: Creates a new directory in the specified location.
Example: C:\>MD HELLO creates a directory named ‘HELLO’ in the C: drive.

RD
Syntax: RD <Directory-Name>
Explanation: Removes the specified directory. To remove a directory, first move one level above it and then give the remove command.
Example: C:\>RD HELLO deletes the ‘HELLO’ directory from the C: drive.

DELTREE
Syntax: DELTREE <Directory-Name>
Explanation: Deletes a directory and all the sub-directories and files in it. Prompts the user for confirmation; if the user selects ‘Y’ (Yes), the directory and all its sub-directories are deleted.
Example: C:\>DELTREE TEMP deletes the directory ‘TEMP’ and all its contents after confirmation.
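The file-management commands above have close analogues in Python's standard library, which makes their semantics easy to demonstrate on any modern system. This is only an illustrative sketch, not DOS itself; the directory and file names mirror the examples in the table, and DELTREE's confirmation prompt has no counterpart here.

```python
# Illustrative analogues of the internal DOS commands above, using
# Python's standard library. Names mirror the table's examples.
import os
import shutil
import tempfile

root = tempfile.mkdtemp()                 # stand-in for a drive such as C:\
data = os.path.join(root, "DATA")
letter = os.path.join(root, "LETTER")

os.mkdir(data)                            # MD DATA
os.mkdir(letter)                          # MD LETTER
with open(os.path.join(data, "HELLO.TXT"), "w") as f:
    f.write("hello")                      # create a file to work with

shutil.copy(os.path.join(data, "HELLO.TXT"), letter)   # COPY HELLO.TXT \LETTER
os.rename(os.path.join(data, "HELLO.TXT"),
          os.path.join(data, "Hi.TXT"))   # REN HELLO.TXT Hi.TXT
print(sorted(os.listdir(data)))           # DIR DATA -> ['Hi.TXT']
os.remove(os.path.join(data, "Hi.TXT"))   # DEL Hi.TXT
shutil.rmtree(data)                       # DELTREE DATA (no confirmation prompt)
print(os.path.exists(data))               # False
```

Note that `shutil.rmtree` corresponds to DELTREE rather than RD: like DELTREE, it removes a directory together with everything inside it, whereas `os.rmdir` (the RD analogue) refuses to delete a non-empty directory.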


I. Wild Cards

Wild card characters can be used in specifying filenames in DOS. There are two types of wild cards (?,*):

? It is used to represent any single character in the file name.

SYNTAX: C:\>DIR BA?.TXT

Displays all the text files in the C: drive starting with ‘BA’ and ending with any single character.

Examples: BAT.TXT, BAG.TXT, BAR.TXT, BAD.TXT, etc.

* It is used to represent any sequence of characters (including none) in a file name.

SYNTAX: C:\>DIR CON*.TXT

Displays all the text files in the C: drive starting with ‘CON’.

Examples: CONCEPT.TXT, CONCATENATE.TXT, CONTEMPT.TXT, CONSOLE.TXT, etc.
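The DOS wild cards behave much like modern shell globbing, and Python's standard `fnmatch` module implements the same `?` and `*` semantics, so the matches above can be checked directly. The file names below are taken from the examples in the text.

```python
# Demonstrating the DOS wild-card semantics with Python's fnmatch,
# which uses the same ? (one character) and * (any run) patterns.
from fnmatch import fnmatch

files = ["BAT.TXT", "BAG.TXT", "BALL.TXT", "CONCEPT.TXT", "CONSOLE.TXT"]

# ? matches exactly one character, so BALL.TXT does not match BA?.TXT
print([f for f in files if fnmatch(f, "BA?.TXT")])    # BAT.TXT, BAG.TXT

# * matches any sequence of characters after the literal prefix CON
print([f for f in files if fnmatch(f, "CON*.TXT")])   # CONCEPT.TXT, CONSOLE.TXT
```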

(d) External DOS Commands

LABEL
Syntax: LABEL
Explanation: Makes, changes or deletes the volume label of a disk.
Example: C:\>LABEL displays the current volume label and volume serial number, and prompts the user to enter a new label.

EDIT
Syntax: EDIT
Explanation: Starts the MS DOS editor, which creates and changes ASCII files.
Example: C:\>EDIT opens the MS DOS editor.

ATTRIB
Syntax: ATTRIB [+/-][Attribute] <Filename>, where + sets an attribute, - clears an attribute, A is the archive attribute, R the read-only attribute, H the hidden-file attribute and S the system-file attribute.
Explanation: Displays or changes file attributes.
Example: C:\>ATTRIB +H +R FIRST.TXT sets the attributes of ‘FIRST.TXT’ as read-only and hidden.

XCOPY
Syntax: XCOPY <Source> <Destination>
Explanation: Copies files and subdirectories to the specified location.
Example: C:\>XCOPY C:\DATA C:\INFO copies the entire contents of the ‘DATA’ folder to the ‘INFO’ folder. If the ‘DATA’ folder contains any subdirectories, they are also copied to the ‘INFO’ folder.

TREE
Syntax: TREE [Drive:][Path] [/F][/A]
Explanation: Displays directory paths and files in each subdirectory. /F displays file names in each listed directory; /A specifies the alternative characters (plus signs, hyphens, etc.) used to draw the tree.
Example: TREE C: lists a tree of the C: drive; TREE D: lists a tree of the D: drive.
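The behaviour of TREE /F can be approximated in a few lines with Python's `os.walk`, which visits every subdirectory in turn. This is a rough sketch rather than a faithful reimplementation: it indents with spaces instead of drawing the line characters TREE uses, and the sample directory layout is invented for the demonstration.

```python
# A rough analogue of the TREE command built on os.walk. With
# show_files=True it also lists files, like TREE /F.
import os
import tempfile

def tree(path, show_files=False):
    """Return TREE-style listing lines for `path`."""
    lines = []
    for dirpath, dirnames, filenames in sorted(os.walk(path)):
        rel = os.path.relpath(dirpath, path)
        depth = 0 if rel == "." else rel.count(os.sep) + 1
        lines.append("    " * depth + (os.path.basename(dirpath) or dirpath))
        if show_files:
            for name in sorted(filenames):
                lines.append("    " * (depth + 1) + name)
    return lines

# Build a small sample layout: root\DATA\TMP.TXT and root\DATA\SUBDATA
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "DATA", "SUBDATA"))
open(os.path.join(root, "DATA", "TMP.TXT"), "w").close()

for line in tree(root, show_files=True):
    print(line)
```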

(e) Limitations of MS DOS

I. It has a text-based user interface where commands have to be typed for each operation the user

wants to perform. The user is expected to remember the commands as well as their syntax.

II. It is a single-user, single-tasking operating system, and its working memory is limited to one megabyte, of

which 640 kilobytes are available to application programs.

III. It does not allow long file names. The user is restricted to eight-character file names with three-

character extensions.
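The eight-plus-three restriction in limitation III can be expressed as a simple regular expression: up to eight characters for the name, an optional dot, and up to three characters for the extension. The pattern below is a simplified sketch that allows letters, digits and a few punctuation characters; real DOS permitted a somewhat larger character set.

```python
# Simplified check for the DOS "8.3" file-name format: at most eight
# characters, then an optional extension of at most three characters.
import re

DOS_8_3 = re.compile(r"[A-Z0-9_\-~]{1,8}(\.[A-Z0-9_\-~]{1,3})?", re.IGNORECASE)

def is_valid_dos_name(name):
    """Return True if `name` fits the simplified 8.3 pattern."""
    return DOS_8_3.fullmatch(name) is not None

print(is_valid_dos_name("HELLO.TXT"))         # True
print(is_valid_dos_name("AUTOEXEC.BAT"))      # True  (8-char name, 3-char ext)
print(is_valid_dos_name("LongFileName.txt"))  # False (name longer than 8 chars)
```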