Computer Architecture & Organization

SRCEM Computer Architecture & Organization 2017

Boolean algebra is used to analyze and simplify digital (logic) circuits. It uses only the binary values 0 and 1, and is therefore also called Binary Algebra or Logical Algebra. Boolean algebra was invented by George Boole in 1854.

Rules in Boolean Algebra

Following are the important rules used in Boolean algebra.

A variable used in Boolean algebra can have only two values: binary 1 for HIGH and binary 0 for LOW.

The complement of a variable is represented by an overbar. Thus, the complement of variable B is represented as B̅. If B = 0 then B̅ = 1, and if B = 1 then B̅ = 0.

ORing of the variables is represented by a plus (+) sign between them. For example ORing of A, B, C is represented as A + B + C.

Logical ANDing of two or more variables is represented by writing a dot between them, such as A.B.C. Sometimes the dot may be omitted, as in ABC.

Truth Tables for the Laws of Boolean

Boolean Expression | Description (Equivalent Switching Circuit) | Boolean Algebra Law or Rule
A + 1 = 1 | A in parallel with closed = "CLOSED" | Annulment
A + 0 = A | A in parallel with open = "A" | Identity
A . 1 = A | A in series with closed = "A" | Identity
A . 0 = 0 | A in series with open = "OPEN" | Annulment
A + A = A | A in parallel with A = "A" | Idempotent
A . A = A | A in series with A = "A" | Idempotent
NOT(NOT A) = A | NOT NOT A (double negative) = "A" | Double Negation
A + A̅ = 1 | A in parallel with NOT A = "CLOSED" | Complement
A . A̅ = 0 | A in series with NOT A = "OPEN" | Complement
A + B = B + A | A in parallel with B = B in parallel with A | Commutative
A.B = B.A | A in series with B = B in series with A | Commutative
NOT(A + B) = A̅ . B̅ | invert and replace OR with AND | de Morgan's Theorem
NOT(A . B) = A̅ + B̅ | invert and replace AND with OR | de Morgan's Theorem

The basic laws of Boolean algebra that relate to the Commutative Law (allowing a change in position for addition and multiplication), the Associative Law (allowing the removal of brackets for addition and multiplication), and the Distributive Law (allowing the factoring of an expression) are the same as in ordinary algebra.

Each of the Boolean laws above is given with just one or two variables, but the number of variables defined by a single law is not limited to this, as there can be an infinite number of variables as inputs to the expression. The Boolean laws detailed above can be used to prove any given Boolean expression as well as to simplify complicated digital circuits.

A brief description of the various laws of Boolean algebra is given below, with A representing a variable input.

Description of the Laws of Boolean Algebra

Annulment Law  – A term AND'ed with a "0" equals 0, and a term OR'ed with a "1" equals 1.

o A . 0 = 0    A variable AND’ed with 0 is always equal to 0.

o A + 1 = 1    A variable OR’ed with 1 is always equal to 1. 

Identity Law  – A term OR'ed with a "0" or AND'ed with a "1" will always equal that term.

o A + 0 = A   A variable OR’ed with 0 is always equal to the variable.

o A . 1 = A    A variable AND’ed with 1 is always equal to the variable.

Idempotent Law  – An input that is AND'ed or OR'ed with itself is equal to that input.

o A + A = A    A variable OR’ed with itself is always equal to the variable.

o A . A = A    A variable AND’ed with itself is always equal to the variable. 

Complement Law  – A term AND'ed with its complement equals "0" and a term OR'ed with its complement equals "1".

o A . A̅ = 0    A variable AND'ed with its complement is always equal to 0.

o A + A̅ = 1    A variable OR'ed with its complement is always equal to 1.

Commutative Law  – The order of application of two separate terms is not important.

o A . B = B . A    The order in which two variables are AND’ed makes no difference.

o A + B = B + A    The order in which two variables are OR’ed makes no difference. 

Double Negation Law  – A term that is inverted twice is equal to the original term. 

o NOT(NOT A) = A     A double complement of a variable is always equal to the variable.

de Morgan's Theorem  – There are two "de Morgan's" rules or theorems:

(1) Two separate terms NOR'ed together is the same as the two terms inverted (complemented) and AND'ed, for example NOT(A + B) = A̅ . B̅.

(2) Two separate terms NAND'ed together is the same as the two terms inverted (complemented) and OR'ed, for example NOT(A . B) = A̅ + B̅.

 

Other algebraic Laws of Boolean not detailed above include:

Distributive Law  – This law permits the multiplying or factoring out of an expression.

o A(B + C) = A.B + A.C    (OR Distributive Law)

o A + (B.C) = (A + B).(A + C)    (AND Distributive Law)

Absorptive Law  – This law enables a reduction in a complicated expression to a simpler one by absorbing like terms.

o A + (A.B) = A    (OR Absorption Law)

o A(A + B) = A    (AND Absorption Law) 

Associative Law  – This law allows the removal of brackets from an expression and regrouping of the variables.

o A + (B + C) = (A + B) + C = A + B + C    (OR Associative Law)

o A(B.C) = (A.B)C = A . B . C    (AND Associative Law)
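
These laws can be checked mechanically by trying every combination of input values. The short Python sketch below (illustrative only, not part of the original notes) verifies a few of them, including both de Morgan theorems, by exhaustive enumeration.

    # Verify a few Boolean laws for every combination of 0/1 inputs.
    for A in (0, 1):
        for B in (0, 1):
            assert (A and B) == (B and A)                    # Commutative law
            assert (A or (A and B)) == A                     # Absorption law
            assert (not (A or B)) == ((not A) and (not B))   # de Morgan (NOR form)
            assert (not (A and B)) == ((not A) or (not B))   # de Morgan (NAND form)
    print("All checked laws hold for every input combination.")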

Boolean Algebra Functions

Using the information above, the simple 2-input AND, OR and NOT gates can be combined to give 16 possible functions of the two variables A and B, as shown in the following table.

Function | Description | Expression
1 | NULL | 0
2 | IDENTITY | 1
3 | Input A | A
4 | Input B | B
5 | NOT A | A̅
6 | NOT B | B̅
7 | A AND B (AND) | A . B
8 | A AND NOT B | A . B̅
9 | NOT A AND B | A̅ . B
10 | NOT AND (NAND) | NOT(A . B)
11 | A OR B (OR) | A + B
12 | A OR NOT B | A + B̅
13 | NOT A OR B | A̅ + B
14 | NOT OR (NOR) | NOT(A + B)
15 | Exclusive-OR | A . B̅ + A̅ . B
16 | Exclusive-NOR | A . B + A̅ . B̅

Laws of Boolean Algebra Example No1

Using the above laws, simplify the following expression:  (A + B)(A + C)

Q = (A + B).(A + C)

  A.A + A.C + A.B + B.C – Distributive law

  A + A.C + A.B + B.C – Idempotent AND law (A.A = A)

  A(1 + C) + A.B + B.C – Distributive law

  A.1 + A.B + B.C – Annulment OR law (1 + C = 1)

  A(1 + B) + B.C – Distributive law

  A.1 + B.C – Annulment OR law (1 + B = 1)

Q = A + (B.C) – Identity AND law (A.1 = A)

 

Then the expression (A + B).(A + C) can be simplified to A + (B.C), which is exactly the AND Distributive law listed above.
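
The same result can be confirmed by brute force: for every combination of A, B and C the two expressions must agree. A small Python check (illustrative sketch, not part of the original notes):

    # Exhaustively confirm that (A + B).(A + C) simplifies to A + (B.C).
    for A in (0, 1):
        for B in (0, 1):
            for C in (0, 1):
                lhs = (A or B) and (A or C)
                rhs = A or (B and C)
                assert lhs == rhs
    print("(A + B).(A + C) = A + (B.C) for all 8 input combinations.")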

Logic gates are the basic building blocks of any digital system. A logic gate is an electronic circuit having one or more inputs and only one output. The relationship between the input and the output is based on a certain logic. Based on this logic, gates are named AND gate, OR gate, NOT gate, and so on.

AND Gate

A circuit which performs an AND operation is shown in the figure. It has n inputs (n >= 2) and one output.

Logic diagram

Truth Table

OR Gate

A circuit which performs an OR operation is shown in the figure. It has n inputs (n >= 2) and one output.

Logic diagram

Truth Table

NOT Gate

The NOT gate is also known as an inverter. It has one input A and one output Y.

Logic diagram

Truth Table

NAND Gate

A NOT-AND operation is known as a NAND operation. It has n inputs (n >= 2) and one output.

Logic diagram

Truth Table

NOR Gate

A NOT-OR operation is known as a NOR operation. It has n inputs (n >= 2) and one output.

Logic diagram

Truth Table

XOR Gate

The XOR or Ex-OR gate is a special type of gate. It can be used in the half adder, full adder and subtractor. The exclusive-OR gate is abbreviated as EX-OR gate or sometimes as X-OR gate. It has n inputs (n >= 2) and one output.

Logic diagram

Truth Table

XNOR Gate

The XNOR gate is a special type of gate. It can be used in the half adder, full adder and subtractor. The exclusive-NOR gate is abbreviated as EX-NOR gate or sometimes as X-NOR gate. It has n inputs (n >= 2) and one output.

Logic diagram

Truth Table
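
Since the truth tables referred to above are given as figures, a compact way to reproduce them is to evaluate each gate for every input pair. A minimal Python sketch (the function names are chosen here only for illustration):

    # Truth tables for the basic 2-input gates.
    def AND(a, b):  return a & b
    def OR(a, b):   return a | b
    def NAND(a, b): return 1 - (a & b)
    def NOR(a, b):  return 1 - (a | b)
    def XOR(a, b):  return a ^ b
    def XNOR(a, b): return 1 - (a ^ b)

    print("A B | AND OR NAND NOR XOR XNOR")
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "|", AND(a, b), OR(a, b), NAND(a, b), NOR(a, b), XOR(a, b), XNOR(a, b))

The NOT gate is simply NOT(a) = 1 - a for a single input a.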

A combinational circuit is a circuit in which different gates are combined, for example an encoder, decoder, multiplexer or demultiplexer. Some of the characteristics of combinational circuits are the following −

The output of a combinational circuit at any instant of time depends only on the levels present at the input terminals.

A combinational circuit does not use any memory. The previous state of the input does not have any effect on the present state of the circuit.

A combinational circuit can have n inputs and m outputs.

Block diagram

We are going to elaborate on a few important combinational circuits as follows.

Half Adder

A half adder is a combinational logic circuit with two inputs and two outputs. The half adder circuit is designed to add two single-bit binary numbers A and B. It is the basic building block for the addition of two single-bit numbers. This circuit has two outputs, carry and sum.

Block diagram

Truth Table

Circuit Diagram
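
Behaviourally, the half adder is just an XOR gate for the sum and an AND gate for the carry. A minimal Python sketch (illustrative, not part of the original notes):

    # Half adder: Sum = A XOR B, Carry = A AND B.
    def half_adder(a, b):
        return a ^ b, a & b          # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            print("A =", a, "B =", b, "->", half_adder(a, b))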

Full Adder

The full adder is developed to overcome the drawback of the half adder circuit. It can add two one-bit numbers A and B along with a carry input C. The full adder is a three-input, two-output combinational circuit.

Block diagram

Truth Table

Circuit Diagram
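
The full adder extends the half adder with the incoming carry. A minimal Python sketch of its behaviour (illustrative):

    # Full adder: Sum = A XOR B XOR Cin, Cout = A.B + Cin.(A XOR B).
    def full_adder(a, b, cin):
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    print(full_adder(1, 1, 1))       # (1, 1): 1 + 1 + 1 = 11 in binary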

N-Bit Parallel Adder

The full adder is capable of adding only two single-bit binary numbers along with a carry input. In practice, however, we need to add binary numbers which are much longer than one bit. To add two n-bit binary numbers we need to use the n-bit parallel adder. It uses a number of full adders in cascade. The carry output of the previous full adder is connected to the carry input of the next full adder.

4 Bit Parallel Adder

In the block diagram, A0 and B0 represent the LSBs of the four-bit words A and B. Hence Full Adder-0 is the lowest stage, and its Cin has been permanently made 0. The rest of the connections are exactly the same as those of the n-bit parallel adder shown in the figure. The four-bit parallel adder is a very common logic circuit.

Block diagram
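
The cascade described above can be sketched directly: each stage is a full adder and the carry ripples from one stage to the next. A small Python illustration (bit lists are LSB first; the function names are only for this sketch):

    # 4-bit ripple-carry adder built from full adders; stage 0 has Cin = 0.
    def full_adder(a, b, cin):
        return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

    def ripple_adder(a_bits, b_bits, cin=0):
        s_bits, carry = [], cin
        for a, b in zip(a_bits, b_bits):       # LSB first
            s, carry = full_adder(a, b, carry)
            s_bits.append(s)
        return s_bits, carry                   # (sum bits, carry out)

    # 0101 (5) + 0011 (3) = 1000 (8): sum bits [0, 0, 0, 1], carry out 0
    print(ripple_adder([1, 0, 1, 0], [1, 1, 0, 0]))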

N-Bit Parallel Subtractor

The subtraction can be carried out by taking the 1's or 2's complement of the number to be subtracted. For example we can perform the subtraction (A-B) by adding either 1's or 2's complement of B to A. That means we can use a binary adder to perform the binary subtraction.

4 Bit Parallel Subtractor

The number to be subtracted (B) is first passed through inverters to obtain its 1's complement. The 4-bit adder then adds A and the 2's complement of B (the inverted B plus 1 through the carry input) to produce the subtraction. S3 S2 S1 S0 represents the result of the binary subtraction (A - B), and the carry output Cout represents the polarity of the result. If A >= B then Cout = 1 and S3 S2 S1 S0 gives (A - B) directly in binary form; if A < B then Cout = 0 and the result is in 2's complement form.

Block diagram
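
The same idea can be shown numerically: inverting B and adding 1 through the carry input produces A - B, and the final carry tells whether the result is direct binary or a 2's complement value. A small Python sketch (illustrative, 4-bit values only):

    # 4-bit subtraction A - B using the 2's complement of B.
    def subtract4(a, b):                 # a, b are unsigned 4-bit values, 0..15
        total = a + ((~b) & 0xF) + 1     # A + 1's complement of B + 1
        cout = (total >> 4) & 1          # carry out: 1 when A >= B
        return total & 0xF, cout

    print(subtract4(9, 5))   # (4, 1): 0100 with Cout = 1, result is A - B directly
    print(subtract4(5, 9))   # (12, 0): 1100 with Cout = 0, result is in 2's complement form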

Half Subtractors

The half subtractor is a combinational circuit with two inputs and two outputs (difference and borrow). It produces the difference between the two binary bits at the input and also produces an output (borrow) to indicate whether a 1 has been borrowed. In the subtraction (A - B), A is called the minuend bit and B is called the subtrahend bit.

Truth Table

Circuit Diagram
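
The half subtractor's behaviour can be written down directly from its truth table. A minimal Python sketch (illustrative):

    # Half subtractor: Difference = A XOR B, Borrow = (NOT A) AND B.
    def half_subtractor(a, b):
        return a ^ b, (1 - a) & b        # (difference, borrow)

    for a in (0, 1):
        for b in (0, 1):
            print("A =", a, "B =", b, "->", half_subtractor(a, b))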

Full Subtractors

The disadvantage of a half subtractor is overcome by the full subtractor. The full subtractor is a combinational circuit with three inputs A, B, C and two outputs D and C'. A is the minuend, B is the subtrahend, C is the borrow produced by the previous stage, D is the difference output and C' is the borrow output.

Truth Table

Circuit Diagram
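
As with the full adder, the full subtractor simply adds the borrow from the previous stage to the half subtractor equations. A minimal Python sketch (illustrative):

    # Full subtractor: D = A XOR B XOR C, borrow out = (NOT A).B + C.NOT(A XOR B).
    def full_subtractor(a, b, c):
        d = a ^ b ^ c
        borrow = ((1 - a) & b) | (c & (1 - (a ^ b)))
        return d, borrow

    print(full_subtractor(0, 1, 1))      # (0, 1): 0 - 1 - 1 needs a borrow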

Multiplexers

A multiplexer is a special type of combinational circuit. There are n data inputs, one output and m select inputs, with 2^m = n. It is a digital circuit which selects one of the n data inputs and routes it to the output. The selection of one of the n inputs is done by the select inputs. Depending on the digital code applied at the select inputs, one out of the n data sources is selected and transmitted to the single output Y. E is called the strobe or enable input, which is useful for cascading. It is generally an active-low terminal, which means it performs the required operation when it is low.

Block diagram

Multiplexers come in multiple variations

2 : 1 multiplexer

4 : 1 multiplexer

16 : 1 multiplexer

32 : 1 multiplexer

Block Diagram

Truth Table
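
The selection logic itself is easy to sketch: the select lines form a binary number that indexes one of the data inputs. A minimal Python illustration of a 4 : 1 multiplexer (an active-high enable is used here for simplicity, whereas the strobe E described above is active low):

    # 4:1 multiplexer: select lines S1 S0 choose one of the inputs D0..D3.
    def mux4(d, s1, s0, enable=1):       # d is the list [D0, D1, D2, D3]
        if not enable:
            return 0
        return d[(s1 << 1) | s0]

    print(mux4([0, 1, 1, 0], s1=1, s0=0))   # selects D2 -> 1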

Demultiplexers

A demultiplexer performs the reverse operation of a multiplexer, i.e. it receives one input and distributes it over several outputs. It has only one input, n outputs and m select inputs. At a time, only one output line is selected by the select lines and the input is transmitted to the selected output line. A demultiplexer is equivalent to a single-pole, multiple-way switch, as shown in the figure.

Demultiplexers come in multiple variations.

1 : 2 demultiplexer

1 : 4 demultiplexer

1 : 16 demultiplexer

1 : 32 demultiplexer

Block diagram

Truth Table
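
A 1 : 4 demultiplexer can be sketched in the same way: the single input appears only on the output line addressed by the select inputs (Python, illustrative):

    # 1:4 demultiplexer: route the input to the output chosen by S1 S0.
    def demux4(d_in, s1, s0):
        y = [0, 0, 0, 0]
        y[(s1 << 1) | s0] = d_in
        return y                         # [Y0, Y1, Y2, Y3]

    print(demux4(1, s1=0, s0=1))         # [0, 1, 0, 0]: input routed to Y1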

Decoder

A decoder is a combinational circuit. It has n inputs and up to m = 2^n outputs. A decoder is identical to a demultiplexer without any data input. It performs operations which are exactly opposite to those of an encoder.

Block diagram

Examples of Decoders are following.

Code converters

BCD to seven segment decoders

Nixie tube decoders

Relay actuator

2 to 4 Line Decoder

The block diagram of a 2 to 4 line decoder is shown in the figure. A and B are the two inputs, while D0 through D3 are the four outputs. The truth table explains the operation of the decoder: it shows that each output is 1 for only a specific combination of inputs.

Block diagram

Truth Table

Logic Circuit
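
The behaviour in the truth table can be reproduced with a few comparisons: exactly one output is 1 for each input combination. A minimal Python sketch of the 2 to 4 line decoder (illustrative):

    # 2-to-4 line decoder: output Di is 1 only when the inputs A B encode i.
    def decoder_2to4(a, b):
        i = (a << 1) | b
        return [int(i == 0), int(i == 1), int(i == 2), int(i == 3)]   # [D0, D1, D2, D3]

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, decoder_2to4(a, b))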

Encoder

An encoder is a combinational circuit which is designed to perform the inverse operation of a decoder. An encoder has n input lines and m output lines. An encoder produces an m-bit binary code corresponding to the digital input number. The encoder accepts an n-bit digital input word and converts it into another m-bit digital word.

Block diagram

Examples of Encoders are following.

Priority encoders

Decimal to BCD encoder

Octal to binary encoder

Hexadecimal to binary encoder

Priority Encoder

This is a special type of encoder in which priority is given to the input lines. If two or more input lines are 1 at the same time, the input line with the highest priority will be considered. There are four inputs D0, D1, D2, D3 and two outputs Y0, Y1. Of the four inputs, D3 has the highest priority and D0 has the lowest priority. That means if D3 = 1 then Y1 Y0 = 11 irrespective of the other inputs. Similarly, if D3 = 0 and D2 = 1 then Y1 Y0 = 10 irrespective of the other inputs.

Block diagram

Truth Table

Logic Circuit
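
The priority rule can be expressed as a simple chain of tests, checking the highest-priority input first. A minimal Python sketch (illustrative):

    # 4-to-2 priority encoder: D3 has the highest priority, D0 the lowest.
    def priority_encoder(d0, d1, d2, d3):
        if d3: return (1, 1)             # (Y1, Y0)
        if d2: return (1, 0)
        if d1: return (0, 1)
        if d0: return (0, 0)
        return None                      # no input is active

    print(priority_encoder(1, 0, 1, 0))  # D2 wins over D0 -> (1, 0)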

A combinational circuit does not use any memory, so the previous state of the input does not have any effect on the present state of the circuit. A sequential circuit, on the other hand, has memory, so its output can vary based on the present input and the stored state. This type of circuit uses previous inputs, outputs, a clock and a memory element.

Block diagram

Flip Flop

Flip flop is a sequential circuit which generally samples its inputs and changes its outputs only at particular instants of time and not continuously. Flip flop is said to be edge sensitive or edge triggered rather than being level triggered like latches.

S-R Flip Flop

It is basically an S-R latch using NAND gates with an additional enable input. It is also called a level-triggered SR flip flop. For this circuit, a change in output will take place if and only if the enable input (E) is made active. In short, this circuit operates as an S-R latch if E = 1, and there is no change in the output if E = 0.

Block Diagram

Circuit Diagram

Truth Table

Operation

S.N. Condition Operation

1 S = R = 0 : No change If S = R = 0 then output of NAND gates 3 and 4 are forced to become 1.

Hence R' and S' both will be equal to 1. Since S' and R' are the input of the basic S-R latch using NAND gates, there will be no change in the state of outputs.

2 S = 0, R = 1, E = 1 Since S = 0, output of NAND-3 i.e. R' = 1 and E = 1 the output of NAND-4 i.e. S' = 0.

Hence Qn+1 = 0 and Qn+1 bar = 1. This is reset condition.

3 S = 1, R = 0, E = 1 Output of NAND-3, i.e. R', is 0 and output of NAND-4, i.e. S', is 1.

Hence the output of the S-R NAND latch is Qn+1 = 1 and Qn+1 bar = 0. This is the set condition.

4 S = 1, R = 1, E = 1 As S = 1, R = 1 and E = 1, the outputs of NAND gates 3 and 4 are both 0, i.e. S' = R' = 0.

Hence the race condition will occur in the basic NAND latch.
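
The four conditions above can be summarised behaviourally: with E = 0 the output holds, with E = 1 the latch sets, resets or holds, and S = R = 1 is the forbidden input. A minimal Python sketch (illustrative, not a gate-level model):

    # Level-triggered S-R flip flop behaviour.
    def sr_next(s, r, e, q):
        if not e:                        # enable low: no change
            return q
        if s and r:
            raise ValueError("S = R = 1 with E = 1 is the forbidden (race) condition")
        if s: return 1                   # set
        if r: return 0                   # reset
        return q                         # S = R = 0: hold

    q = 0
    q = sr_next(1, 0, 1, q)              # set   -> Q = 1
    q = sr_next(0, 0, 1, q)              # hold  -> Q = 1
    q = sr_next(0, 1, 1, q)              # reset -> Q = 0
    print(q)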

Master Slave JK Flip Flop

The master-slave JK flip flop is a cascade of two S-R flip flops with feedback from the output of the second to the input of the first. The master is positive level triggered, but due to the presence of the inverter in the clock line, the slave responds to the negative level. Hence when the clock = 1 (positive level) the master is active and the slave is inactive, whereas when the clock = 0 (low level) the slave is active and the master is inactive.

Circuit Diagram

Truth Table

Operation

S.N. Condition Operation

1 J = K = 0 (No change) When clock = 0, the slave becomes active and master is inactive. But since the S and R inputs have not changed, the slave outputs will also remain unchanged. Therefore outputs will not change if J = K =0.

2 J = 0 and K = 1 (Reset) Clock = 1 − Master active, slave inactive. Therefore outputs of the master become Q1 = 0 and Q1 bar = 1. That means S = 0 and R =1.

Clock = 0 − Slave active, master inactive. Therefore outputs of the slave become Q = 0 and Q bar = 1.

Again clock = 1 − Master active, slave inactive. Therefore even with the changed outputs Q = 0 and Q bar = 1 fed back to master, its output will be Q1 = 0 and Q1 bar = 1. That means S = 0 and R = 1.

Hence with clock = 0 and slave becoming active the outputs of slave will remain Q = 0 and Q bar = 1. Thus we get a stable output from the Master slave.

3 J = 1 and K = 0 (Set) Clock = 1 − Master active, slave inactive. Therefore outputs of the master become Q1 = 1 and Q1 bar = 0. That means S = 1 and R =0.

Clock = 0 − Slave active, master inactive. Therefore outputs of the slave become Q = 1 and Q bar = 0.

Again clock = 1 − then it can be shown that the outputs of the slave are stabilized to Q = 1 and Q bar = 0.

4 J = K = 1 (Toggle) Clock = 1 − Master active, slave inactive. Outputs of master will toggle. So S and R also will be inverted.

Clock = 0 − Slave active, master inactive. Outputs of slave will toggle.

These changed outputs are returned to the master inputs. But since clock = 0, the master is still inactive, so it does not respond to these changed outputs. This avoids the multiple toggling which leads to the race-around condition. The master-slave flip flop therefore avoids the race-around condition.
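
Seen from outside, the master-slave arrangement gives exactly one clean JK transition per clock pulse. A minimal Python sketch of the next-state behaviour (illustrative):

    # JK flip flop next-state table: 00 hold, 01 reset, 10 set, 11 toggle.
    def jk_next(j, k, q):
        if j == 0 and k == 0: return q
        if j == 0 and k == 1: return 0
        if j == 1 and k == 0: return 1
        return 1 - q                     # J = K = 1: toggle

    q = 0
    for pulse in range(4):               # with J = K = 1 the output toggles each pulse
        q = jk_next(1, 1, q)
        print("after pulse", pulse + 1, ": Q =", q)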

Delay Flip Flop / D Flip Flop

The Delay flip flop or D flip flop is a simple gated S-R latch with a NAND inverter connected between the S and R inputs. It has only one input. The input data appears at the output after some time; due to this delay between input and output, it is called the delay flip flop. Because of the NAND inverter, S and R are always complements of each other. Hence the input conditions S = R = 0 and S = R = 1 can never appear, and the forbidden and race conditions of the S-R latch are avoided.

Block Diagram

Circuit Diagram

Truth Table

Operation

S.N. Condition Operation

1 E = 0 Latch is disabled. Hence no change in output.

2 E = 1 and D = 0 If E = 1 and D = 0 then S = 0 and R = 1. Hence irrespective of the present state, the next state is Qn+1 = 0 and Qn+1 bar = 1. This is the reset condition.

3 E = 1 and D = 1 If E = 1 and D = 1, then S = 1 and R = 0. This will set the latch and Qn+1 = 1 and Qn+1 bar = 0 irrespective of the present state.
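
Because S and R are always complementary, the stored value simply follows D whenever the latch is enabled. A minimal Python sketch (illustrative):

    # D flip flop behaviour: Q follows D when enabled, otherwise holds.
    def d_next(d, e, q):
        return d if e else q

    print(d_next(d=1, e=1, q=0))         # 1: latch enabled, Q follows D
    print(d_next(d=0, e=0, q=1))         # 1: latch disabled, no change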

Toggle Flip Flop / T Flip Flop

The toggle flip flop is basically a JK flip flop with the J and K terminals permanently connected together. It has only one input, denoted by T, as shown in the symbol diagram. The symbol for a positive edge triggered T flip flop is shown in the block diagram.

Symbol Diagram

Block Diagram

Truth Table

Operation

S.N. Condition Operation

1 T = 0, J = K = 0 The output Q and Q bar won't change

2 T = 1, J = K = 1 Output will toggle corresponding to every leading edge of clock signal.
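
The two rows of the operation table reduce to a single expression: the next state is the present state XOR T. A minimal Python sketch (illustrative):

    # T flip flop: Q toggles on each clock pulse when T = 1, holds when T = 0.
    def t_next(t, q):
        return q ^ t

    q = 0
    for _ in range(3):
        q = t_next(1, q)
        print(q)                         # prints 1, 0, 1: the output toggles every pulse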

A memory is just like a human brain. It is used to store data and instructions. Computer memory is the storage space in the computer where the data to be processed and the instructions required for processing are stored.

The memory is divided into a large number of small parts. Each part is called a cell. Each location or cell has a unique address, which varies from zero to memory size minus one.

For example, if the computer has 64K words, then this memory unit has 64 * 1024 = 65536 memory locations. The addresses of these locations vary from 0 to 65535.

Memory is primarily of two types

Internal Memory − cache memory and primary/main memory

External Memory − magnetic disk / optical disk etc.

The characteristics of the memory hierarchy are the following as we go from top to bottom.

Capacity in terms of storage increases.

Cost per bit of storage decreases.

Frequency of access of the memory by the CPU decreases.

Access time by the CPU increases.

RAM

RAM constitutes the internal memory of the CPU for storing data, programs and program results. It is read/write memory. It is called random access memory (RAM).

Access time in RAM is independent of the address of the word; that is, each storage location inside the memory is as easy to reach as any other location and takes the same amount of time. We can reach into the memory at random and extremely fast, but RAM can also be quite expensive.

RAM is volatile, i.e. data stored in it is lost when we switch off the computer or if there is a power failure. Hence, a backup uninterruptible power supply (UPS) is often used with computers. RAM is small, both in terms of its physical size and in the amount of data it can hold.

RAM is of two types

Static RAM (SRAM)

Dynamic RAM (DRAM)

Static RAM (SRAM)

The word static indicates that the memory retains its contents as long as power remains applied. However, data is lost when the power goes down, due to its volatile nature. SRAM chips use a matrix of six transistors and no capacitors. Transistors do not require power to prevent leakage, so SRAM does not need to be refreshed on a regular basis.

Because of the extra space in the matrix, SRAM uses more chips than DRAM for the same amount of storage space, thus making the manufacturing costs higher.

Static RAM is used as cache memory, which needs to be very fast and small.

Dynamic RAM (DRAM)

DRAM, unlike SRAM, must be continually refreshed in order for it to maintain the data. This is done by placing the memory on a refresh circuit that rewrites the data several hundred times per second. DRAM is used for most system memory because it is cheap and small. All DRAMs are made up of memory cells. These cells are composed of one capacitor and one transistor.

ROM

ROM stands for Read Only Memory. It is memory from which we can only read but to which we cannot write. This type of memory is non-volatile. The information is stored permanently in such memories during manufacture.

A ROM stores the instructions that are required to start a computer when electricity is first turned on; this operation is referred to as bootstrap. ROM chips are used not only in computers but also in other electronic items like washing machines and microwave ovens.

Following are the various types of ROM −

MROM (Masked ROM)

The very first ROMs were hard-wired devices that contained a pre-programmed set of data or instructions. These kinds of ROMs are known as masked ROMs, and they are inexpensive.

PROM (Programmable Read Only Memory)

PROM is read-only memory that can be modified only once by a user. The user buys a blank PROM and enters the desired contents using a PROM programmer. Inside the PROM chip there are small fuses which are burnt open during programming. It can be programmed only once and is not erasable.

EPROM (Erasable and Programmable Read Only Memory)

The EPROM can be erased by exposing it to ultra-violet light for a duration of upto 40 minutes. Usually, an EPROM eraser achieves this function. During programming an electrical charge is trapped in an insulated gate region. The charge is retained for more than ten years because the charge has no leakage path. For erasing this charge, ultra-violet light is passed through a quartz crystal window (lid). This exposure to ultra-violet light dissipates the charge. During normal use the quartz lid is sealed with a sticker.

EEPROM (Electrically Erasable and Programmable Read Only Memory)

The EEPROM is programmed and erased electrically. It can be erased and reprogrammed about ten thousand times. Both erasing and programming take about 4 to 10 ms (millisecond). In EEPROM, any location can be selectively erased and programmed. EEPROMs can be erased one byte at a time, rather than erasing the entire chip. Hence, the process of re-programming is flexible but slow.

Serial Access Memory

Sequential access means the system must search the storage device from the beginning of the memory address until it finds the required piece of data. Memory device which supports such access is called a Sequential Access Memory or Serial Access Memory. Magnetic tape is an example of serial access memory.

Direct Access Memory

Direct access memory or Random Access Memory, refers to conditions in which a system can go directly to the information that the user wants. Memory device which supports such access is called a Direct Access Memory. Magnetic disks, optical disks are examples of direct access memory.

Cache Memory

Cache memory is a very high speed semiconductor memory which can speed up CPU. It acts as a buffer between the CPU and main memory. It is used to hold those parts of data and program which are most frequently used by CPU. The parts of data and programs, are transferred from disk to cache memory by operating system, from where CPU can access them.

Advantages

Cache memory is faster than main memory.

It consumes less access time as compared to main memory.

It stores the program that can be executed within a short period of time.

It stores data for temporary use.

Disadvantages

Cache memory has limited capacity.

It is very expensive.

Virtual memory is a technique that allows the execution of processes which are not completely available in memory. The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory is the separation of user logical memory from physical memory.

This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available. Following are the situations, when entire program is not required to be loaded fully in main memory.

User written error handling routines are used only when an error occurred in the data or computation.

Certain options and features of a program may be used rarely.

Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.

The ability to execute a program that is only partially in memory confers many benefits:

Fewer I/O operations would be needed to load or swap each user program into memory.

A program would no longer be constrained by the amount of physical memory that is available.

Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

Auxiliary Memory

Auxiliary memory is much larger in size than main memory but is slower. It normally stores system programs, instructions and data files. It is also known as secondary memory. It can also be used as overflow/virtual memory in case the main memory capacity has been exceeded. Secondary memories cannot be accessed directly by the processor: first the data/information in auxiliary memory is transferred to main memory, and then that information can be accessed by the CPU. The characteristics of auxiliary memory are the following −

Non-volatile memory − Data is not lost when power is cut off.

Reusable − The data stays in secondary storage on a permanent basis until it is overwritten or deleted by the user.

Reliable − Data in secondary storage is safe because of high physical stability of secondary storage device.

Convenience − With the help of a computer software, authorised people can locate and access the data quickly.

Capacity − Secondary storage can store large volumes of data in sets of multiple disks.

Cost − It is much less expensive to store data on a tape or disk than in primary memory.

The microprocessing unit is synonymous with the central processing unit (CPU) used in a traditional computer. A microprocessor (MPU) acts as a device or a group of devices which perform the following tasks.

communicate with peripheral devices

provide timing signals

direct data flow

perform computer tasks as specified by the instructions in memory

8085 Microprocessor

The 8085 microprocessor is an 8-bit general purpose microprocessor which is capable of addressing 64K of memory. This processor has forty pins, requires a single +5 V power supply and a 3-MHz single-phase clock.

Block Diagram

ALU

The ALU performs the computing functions of the microprocessor. It includes the accumulator, the temporary register, the arithmetic and logic circuits, and five flags. The result is stored in the accumulator and the flags.

Block Diagram

Accumulator

It is an 8-bit register that is part of the ALU. This register is used to store 8-bit data and to perform arithmetic and logic operations. The result of an operation is stored in the accumulator.

Diagram

Flags

Flags are programmable. They can be used to store and transfer the data from the registers by using instruction. The ALU includes five flip-flops that are set and reset according to data condition in accumulator and other registers.

S (Sign) flag − After the execution of an arithmetic operation, if bit D7 of the result is 1, the sign flag is set. It is used with signed numbers. In a given byte, if D7 is 1 the number is negative; if it is zero, the number is positive.

Z (Zero) flag − The zero flag is set if ALU operation result is 0.

AC (Auxiliary Carry) flag − In an arithmetic operation, when a carry is generated by digit D3 and passed on to digit D4, the AC flag is set. This flag is used only internally for BCD operations.

P (Parity) flag − After arithmetic or logic operation, if result has even number of 1s, the flag is set. If it has odd number of 1s, flag is reset.

C (Carry) flag − If an arithmetic operation results in a carry, the carry flag is set; otherwise it is reset.
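
As a worked illustration, the flags after an 8-bit addition can be computed directly from the definitions above. The Python sketch below is only illustrative (the helper name and the sample operands are not from the original notes):

    # Flags after adding two 8-bit values, following the 8085 definitions above.
    def flags_after_add(a, b):
        total = a + b
        r8 = total & 0xFF
        return {
            "S":  (r8 >> 7) & 1,                         # sign: bit D7 of the result
            "Z":  int(r8 == 0),                          # zero result
            "AC": int(((a & 0xF) + (b & 0xF)) > 0xF),    # carry from D3 into D4
            "P":  int(bin(r8).count("1") % 2 == 0),      # set for an even number of 1s
            "C":  int(total > 0xFF),                     # carry out of D7
        }

    print(flags_after_add(0x3C, 0xC4))   # 0x3C + 0xC4 = 0x100 -> Z = 1, AC = 1, C = 1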

Register section

It is basically a storage device and transfers data from registers by using instructions.

Stack Pointer (SP) − The stack pointer is also a 16-bit register which is used as a memory pointer. It points to a memory location in read/write memory known as the stack. During the execution of a program, data sometimes has to be stored on the stack. The beginning of the stack is defined by loading a 16-bit address into the stack pointer.

Program Counter (PC) − This 16-bit register sequences the execution of instructions. It is also a memory pointer. Memory locations have 16-bit addresses, and the PC is used to store the execution address. The function of the program counter is to point to the memory address from which the next byte is to be fetched.

Storage registers − These registers store 8-bit data during program execution. They are identified as B, C, D, E, H and L. They can be combined as the register pairs BC, DE and HL to perform some 16-bit operations.

Time and Control Section

This unit is responsible for synchronizing microprocessor operations with the clock pulse and for generating the control signals necessary for smooth communication between the microprocessor and peripheral devices. The RD bar and WR bar signals are synchronous pulses which indicate whether data is available on the data bus or not. The control unit is responsible for controlling the flow of data between the microprocessor, memory and peripheral devices.

PIN diagram

All the signals can be classified into six groups.

S.N. Group Description

1 Address bus The 8085 microprocessor has 8 signal lines, A15 - A8, which are unidirectional and used as the high-order address bus.

2 Data bus The signal lines AD7 - AD0 are bi-directional and serve a dual purpose. They are used as the low-order address bus as well as the data bus.

3 Control signal and Status signal

Control Signal

RD bar − It is a read control signal (active low). When it is active, data is read from the selected memory.

WR bar − It is a write control signal (active low). It is active when data is being written into the selected memory.

Status signal

ALE (Address Latch Enable) − When ALE is high, the 8085 microprocessor uses the address bus; when ALE is low, it uses the data bus.

IO/M bar − This is a status signal used to differentiate between I/O and memory operations. When it is high, it indicates an I/O operation; when it is low, it indicates a memory operation.

S1 and S0 − These status signals, similar to IO/M bar, can identify various operations, but they are rarely used in small systems.

4 Power supply and frequency signal

Vcc − +5v power supply.

Vss − ground reference.

X1, X2 − A crystal is connected at these two pins. The frequency is internally divided by two; to operate the system at 3 MHz, the crystal should have a frequency of 6 MHz.

CLK out − This signal can be used as the system clock for other devices.

5 Externally initiated signals

INTR (i/p) − Interrupt request.

INTA bar (o/p) − It is used as acknowledge interrupt.

TRAP (i/p) − This is non maskable interrupt and has highest priority.

HOLD (i/p) − It is used to hold the executing program.

HLDA (o/p) − Hold acknowledge.

READY (i/p) − This signal is used to delay the microprocessor read or write cycle until a slow responding peripheral is ready to accept or send data.

RESET IN bar − When the signal on this pin goes low, the program counter is set to zero, the bus are tri-stated, & MPU is reset.

RESET OUT − This signal indicate that MPU is being reset. The signal can be used to reset other devices.

RST 7.5, RST 6.5, RST 5.5 (Request interrupt) − It is used to transfer the program control to specific memory location. They have higher priority than INTR interrupt.

6 Serial I/O ports The 8085 microprocessor has two signals to implement the serial transmission serial input data and serial output data.

Instruction Format

Each instruction is represented by a sequence of bits within the computer. The instruction is divided into groups of bits called fields. The way an instruction is expressed is known as the instruction format. It is usually represented in the form of a rectangular box. The instruction format may be of the following types.

Variable Instruction Formats

These are instruction formats in which the instruction length varies on the basis of the opcode and address specifiers. For example, VAX instructions vary between 1 and 53 bytes, while x86 instructions vary between 1 and 17 bytes.

Format

Advantage

These formats have good code density.

Drawback

These instruction formats are very difficult to decode and pipeline.

Fixed Instruction Formats

In this type of instruction format, all instructions are of same size. For Example, MIPS, Power PC, Alpha, ARM.

Format

Advantage

They are easy to decode & pipeline.

Drawback

They don't have good code density.

Hybrid Instruction Formats

In this type of instruction format, we have multiple format lengths specified by the opcode. For example: IBM 360/70, MIPS16, Thumb.

Format

Advantage

These formats are a compromise: they give better code density than fixed formats, while instructions of this type are still fairly easy to decode.

Addressing Modes

Addressing modes provide different ways for a processor to access the address of a given piece of data. Operand data is stored in memory locations, and each instruction requires certain data on which it has to operate. There are various techniques to specify the address of the data; these techniques are called addressing modes.

Direct addressing mode − In the direct addressing mode, the address of the operand is given in the instruction, and the data is available in the memory location provided in the instruction. We then move this data to the desired location.

Indirect addressing mode − In the indirect addressing mode, the instruction specifies a register which contains the address of the operand. Both internal RAM and external RAM can be accessed via the indirect addressing mode.

Immediate addressing mode − In the immediate addressing mode, the data itself is given in the instruction as the operand and is moved, for example, into the accumulator. It is very fast.

Relative addressing mode − In the relative addressing mode, the effective address is determined as in the index mode, but using the program counter instead of a general purpose processor register.

Index addressing mode − In the index addressing mode, the effective address of the operand is generated by adding a constant value (displacement) to the contents of a register.
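
The differences between these modes can be illustrated with a toy memory model. The Python sketch below only illustrates where the operand comes from in each mode (all addresses and values are hypothetical):

    # Toy illustration of addressing modes.
    memory = {0x20: 7, 0x24: 9}          # a tiny "RAM"
    register = 0x20                      # a register holding an address

    immediate = 5                        # immediate: the operand is part of the instruction
    direct    = memory[0x20]             # direct: the instruction gives the operand address
    indirect  = memory[register]         # indirect: a register holds the operand address
    indexed   = memory[register + 4]     # indexed: register contents plus a displacement

    print(immediate, direct, indirect, indexed)   # 5 7 7 9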

Flynn's classification is based on the multiplicity of the instruction stream and data stream observed by the CPU during program execution. It was proposed by Michael J. Flynn in 1966. There are four types, three of which are commercially available.

SISD (Single Instruction stream, Single Data stream) :- Sequential execution of instructions is carried out by one CPU. The CPU contains one PE (processing element), i.e. one ALU under one control unit. This is the conventional serial computer.

SIMD (Single Instruction stream, Multiple Data streams) :- The same instruction is executed by different processors on different data streams. The instruction stream is broadcast from the same control unit to the different processors. Main memory is divided into multiple modules, generating a unique data stream for each processor.

MISD (Multiple Instruction streams, Single Data stream) :- Multiple processing elements are organised under the control of multiple control units. Each control unit handles one instruction stream and processes it through its corresponding processing element, while all processing elements operate on a single data stream. No commercial machines have been built using this organization.

MIMD (Multiple Instruction streams, Multiple Data streams) :- Each processor fetches its own instruction stream and operates on its own data stream. Memory is divided into different modules to generate a unique data stream for each processing element. Each processing element has its own control unit. Each processor can start or finish at a different time, so everything runs asynchronously (unlike SIMD).

An Operating System (OS) is an interface between a computer user and computer hardware. An operating system is a software which performs all the basic tasks like file management, memory management, process management, handling input and output, and controlling peripheral devices such as disk drives and printers.

Some popular Operating Systems include Linux, Windows, OS X, VMS, OS/400, AIX, z/OS, etc.

Definition

An operating system is a system software , which directly interact with hardware and creates an interface between application software and users.

Following are some of important functions of an operating System.

Memory Management

Processor Management

Device Management

File Management

Security

Control over system performance

Job accounting

Error detecting aids

Coordination between other software and users

Memory Management

Memory management refers to management of Primary Memory or Main Memory. Main memory is a large array of words or bytes where each word or byte has its own address.

Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in main memory. An Operating System does the following activities for memory management −

Keeps track of primary memory, i.e., which parts of it are in use and by whom, and which parts are not in use.

In multiprogramming, the OS decides which process will get memory when and how much.

Allocates the memory when a process requests it to do so.

De-allocates the memory when a process no longer needs it or has been terminated.

Processor Management

In multiprogramming environment, the OS decides which process gets the processor when and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management −

Keeps tracks of processor and status of process. The program responsible for this task is known as traffic controller.

Allocates the processor (CPU) to a process.

De-allocates processor when a process is no longer required.

Device Management

An Operating System manages device communication via their respective drivers. It does the following activities for device management −

Keeps tracks of all devices. Program responsible for this task is known as the I/O controller.

Decides which process gets the device when and for how much time.

Allocates the device in the efficient way.

De-allocates devices.

File Management

A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.

An Operating System does the following activities for file management −

Keeps track of information, location, uses, status etc. The collective facilities are often known as file system.

Decides who gets the resources.

Allocates the resources.

De-allocates the resources.

Other Important Activities

Following are some of the important activities that an Operating System performs −

Security − By means of password and similar other techniques, it prevents unauthorized access to programs and data.

Control over system performance − Recording delays between request for a service and response from the system.

Job accounting − Keeping track of time and resources used by various jobs and users.

Error detecting aids − Production of dumps, traces, error messages, and other debugging and error detecting aids.

Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.

Types of Operating System

SIMPLE BATCH SYSTEMS

In this type of system, there is no direct interaction between user and the computer.

The user has to submit a job (written on cards or tape) to a computer operator.

Then computer operator places a batch of several jobs on an input device.

Jobs are batched together by type of languages and requirement.

Then a special program, the monitor, manages the execution of each program in the batch.

The monitor is always in the main memory and available for execution.

Following are some disadvantages of this type of system :

1. Zero interaction between user and computer.

2. No mechanism to prioritize processes.

MULTIPROGRAMMING BATCH SYSTEMS

In this type of system, the operating system picks one job from memory and begins to execute it.

Once this job needs an I/O operation, the operating system switches to another job (keeping the CPU and OS always busy).

Jobs in the memory are always less than the number of jobs on disk(Job Pool).

If several jobs are ready to run at the same time, then system chooses which one to run (CPU Scheduling).

In Non-multiprogrammed system, there are moments when CPU sits idle and does not do any work.

In Multiprogramming system, CPU will never be idle and keeps on processing.

Time-Sharing Systems are very similar to Multiprogramming batch systems. In fact time sharing systems are an extension of multiprogramming systems.

In time sharing systems the prime focus is on minimizing the response time, while in multiprogramming the prime focus is to maximize the CPU usage.

MULTIPROCESSOR SYSTEMS

A multiprocessor system consists of several processors that share a common physical memory. A multiprocessor system provides higher computing power and speed. In a multiprocessor system all processors operate under a single operating system. The multiplicity of the processors and how they act together are transparent to the users.

Following are some advantages of this type of system.

1. Enhanced performance

2. Execution of several tasks by different processors concurrently, increases the system's throughput without speeding up the execution of a single task.

3. If possible, system divides task into many subtasks and then these subtasks can be executed in parallel in different processors. Thereby speeding up the execution of single tasks.

DISTRIBUTED OPERATING SYSTEMS

The motivation behind developing distributed operating systems is the availability of powerful and inexpensive microprocessors and advances in communication technology.

These advancements in technology have made it possible to design and develop distributed systems comprising many computers that are interconnected by communication networks. The main benefit of distributed systems is their low price/performance ratio.

Following are some advantages of this type of system.

1. As there are multiple systems involved, user at one site can utilize the resources of systems at other sites for resource-intensive tasks.

2. Fast processing.

3. Less load on the Host Machine.

REAL-TIME OPERATING SYSTEM

It is defined as an operating system known to give maximum time for each of the critical operations that it performs, like OS calls and interrupt handling.

A real-time operating system which guarantees the maximum time for critical operations and completes them on time is referred to as a Hard Real-Time Operating System.

Real-time operating systems that can only guarantee a maximum time, i.e. the critical task will get priority over other tasks but with no assurance of completing it within a defined time, are referred to as Soft Real-Time Operating Systems.

The simplest way to examine the advantages and disadvantages of RISC architecture is by contrasting it with its predecessor: CISC (Complex Instruction Set Computer) architecture.

RISC Vs CISC

Multiplying Two Numbers in Memory

Consider a diagram representing the storage scheme for a generic computer. The main memory is divided into locations numbered from (row) 1: (column) 1 to (row) 6: (column) 4. The execution unit is responsible for carrying out all computations. However, the execution unit can only operate on data that has been loaded into one of the six registers (A, B, C, D, E, or F). Let's say we want to find the product of two numbers - one stored in location 2:3 and another stored in location 5:2 - and then store the product back in the location 2:3.

The CISC Approach

The primary goal of CISC architecture is to complete a task in as few lines of assembly as possible. This is achieved by building processor hardware that is capable of understanding and executing a series of operations. For this particular task, a CISC processor would come prepared with a specific instruction (we'll call it "MULT"). When executed, this instruction loads the two values into separate registers, multiplies the operands in the execution unit, and then stores the product in the appropriate register. Thus, the entire task of multiplying two numbers can be completed with one instruction:

MULT 2:3, 5:2

MULT is what is known as a "complex instruction." It operates directly on the computer's memory banks and does not require the programmer to explicitly call any loading or storing functions. It closely resembles a command in a higher level language. For instance, if we let "a" represent the value of 2:3 and "b" represent the value of 5:2, then this command is identical to the C statement "a = a * b."

One of the primary advantages of this system is that the compiler has to do very little work to translate a high-level language statement into assembly. Because the length of the code is relatively short, very little RAM is required to store instructions. The emphasis is put on building complex instructions directly into the hardware.

The RISC Approach

RISC processors only use simple instructions that can be executed within one clock cycle. Thus, the "MULT" command described above could be divided into three separate commands: "LOAD," which moves data from the memory bank to a register, "PROD," which finds the product of two operands located within the registers, and "STORE," which moves data from a register to the memory banks. In order to perform the exact series of steps described in the CISC approach, a programmer would need to code four lines of assembly:

LOAD A, 2:3
LOAD B, 5:2
PROD A, B
STORE 2:3, A

At first, this may seem like a much less efficient way of completing the operation. Because there are more lines of code, more RAM is needed to store the assembly level instructions. The compiler must also perform more work to convert a high-level language statement into code of this form.

However, the RISC strategy also brings some very important advantages. Because each instruction requires only one clock cycle to execute, the entire program will execute in approximately the same amount of time as the multi-cycle "MULT" command. These RISC "reduced instructions" require fewer transistors of hardware space than the complex instructions, leaving more room for general purpose registers. Because all of the instructions execute in a uniform amount of time (i.e. one clock), pipelining is possible.

Separating the "LOAD" and "STORE" instructions actually reduces the amount of work that the computer must perform. After a CISC-style "MULT" command is executed, the processor automatically erases the registers. If one of the operands needs to be used for another computation, the processor must re-load the data from the memory bank into a register. In RISC, the operand will remain in the register until another value is loaded in its place.

The Performance Equation

The following equation is commonly used for expressing a computer's performance ability:

    time/program = (instructions/program) x (cycles/instruction) x (time/cycle)

The CISC approach attempts to minimize the number of instructions per program, sacrificing the number of cycles per instruction. RISC does the opposite, reducing the cycles per instruction at the cost of the number of instructions per program.
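
Plugging hypothetical numbers into the performance equation makes the trade-off concrete. The figures below are invented purely for illustration:

    # time/program = instructions/program x cycles/instruction x time/cycle
    def run_time_ns(instructions, cpi, cycle_time_ns):
        return instructions * cpi * cycle_time_ns

    cisc = run_time_ns(instructions=10_000, cpi=4, cycle_time_ns=10)   # fewer, multi-cycle instructions
    risc = run_time_ns(instructions=25_000, cpi=1, cycle_time_ns=10)   # more, single-cycle instructions
    print(cisc, "ns (CISC-style) vs", risc, "ns (RISC-style)")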

RISC Roadblocks

Despite the advantages of RISC-based processing, RISC chips took over a decade to gain a foothold in the commercial world. This was largely due to a lack of software support.

Although Apple's Power Macintosh line featured RISC-based chips and Windows NT was RISC compatible, Windows 3.1 and Windows 95 were designed with CISC processors in mind. Many companies were unwilling to take a chance with the emerging RISC technology. Without commercial interest, processor developers were unable to manufacture RISC chips in large enough volumes to make their price competitive.

CISC | RISC
Emphasis on hardware | Emphasis on software
Includes multi-clock complex instructions | Single-clock, reduced instructions only
Memory-to-memory: "LOAD" and "STORE" incorporated in instructions | Register to register: "LOAD" and "STORE" are independent instructions
Small code sizes, high cycles per second | Low cycles per second, large code sizes
Transistors used for storing complex instructions | Spends more transistors on memory registers

Another major setback was the presence of Intel. Although their CISC chips were becoming increasingly unwieldy and difficult to develop, Intel had the resources to plow through development and produce powerful processors. Although RISC chips might surpass Intel's efforts in specific areas, the differences were not great enough to persuade buyers to change technologies.

The Overall RISC Advantage

Today, the Intel x86 is arguably the only chip which retains CISC architecture. This is primarily due to advancements in other areas of computer technology. The price of RAM has decreased dramatically. In 1977, 1MB of DRAM cost about $5,000. By 1994, the same amount of memory cost only $6 (when adjusted for inflation). Compiler technology has also become more sophisticated, so that the RISC use of RAM and emphasis on software has become ideal.

Registers

A register is a discrete memory location within the CPU designed to hold temporary data and instructions. A modern CPU will hold a number of registers. There are a number of general purpose registers that the programmer can use to hold intermediate results whilst working through a calculation or algorithm.

Then there are special-purpose registers designed to carry out a specific role. Each of these registers are given a name so that the programmer can write their software code to access them. Different manufacturers of CPU chips call them by different names (which makes life interesting for a professional programmer!)

But generically these include

Program Counter (PC)

This holds the address in memory of the next instruction.

For example if the program counter has the address 305 then the next instruction will be at location 305 in main memory (RAM). When a program is running, the program counter will often just be incrementing as it addresses one instruction after the other, e.g. 305, 306, 307. However, the instructions will often modify the next address, for example, 305 becomes 39. What has happened is called a 'jump instruction'. This is how the software programmer will cause different parts of his code to run depending on some condition e.g. a conditional IF statement. It is also how an interrupt routine is serviced. The program counter will be loaded with the starting address of the interrupt routine.

Current Instruction Register (CIR)

This holds the current instruction to be executed, having been fetched from memory.

Program Status Word (PSW)

The PSW has a number of duties all rolled into one.

The Arithmetic Logic Unit compares two data items together, and it arranges for the result of that comparison to appear in this register i.e. the result of 'greater than' etc.

The PSW also indicates if program conditions have been met that would lead to a jump to a different part of the program. In programming terms this means the result of an IF statement. An IF statement is important in any programming language as it allows execution to jump from one set of instructions to another.

The PSW also holds error flags that indicate a number of problems that may have happened as a result of an instruction, such as 'overflow', which means a calculation has exceeded its allowed number range.

A commonly used term is 'flag'. This denotes a single binary bit within a register. They are often used to indicate a true or false condition.

Memory Address Register (MAR)

Remember that data and program instructions have to be fetched from memory? The memory address register, or MAR, holds the location in memory (address) of the next piece of data or program to be fetched (or stored).

 Memory Buffer Register or Memory Data Register

When data or a program instruction is fetched from memory, it is temporarily held in the 'Memory Buffer Register', or MBR for short, sometimes also called the Memory Data Register or MDR.

A 'buffer' is a commonly used computer term to describe memory designed to hold data that is on its way to somewhere else.

A memory buffer is a bit like the buffers on a train carriage: as the carriages connect with each other, the buffers soak up the force. In memory, a bunch of data is absorbed quickly and then released at a controlled rate.
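
One instruction fetch ties all of these registers together. The short Python sketch below walks through a single fetch step with toy values (the addresses and contents are hypothetical):

    # One fetch step using PC, MAR, MBR/MDR and CIR.
    RAM = {305: "LOAD", 306: "ADD", 307: "STORE"}

    PC  = 305              # Program Counter: address of the next instruction
    MAR = PC               # address is copied into the Memory Address Register
    MBR = RAM[MAR]         # the instruction arrives in the Memory Buffer/Data Register
    CIR = MBR              # and is held in the Current Instruction Register for decoding
    PC += 1                # PC now points at the next instruction

    print(CIR, PC)         # LOAD 306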
