
MCA Assignment for MC0062-SMU

Q-1. Describe the concept of binary arithmetic with suitable numerical examples.

Binary arithmetic

Arithmetic in binary is much like arithmetic in other numeral systems. Addition, subtraction, multiplication, and division can be performed on binary numerals.

Addition

(Figure: the circuit diagram for a binary half adder, which adds two bits together, producing sum and carry bits.)

The simplest arithmetic operation in binary is addition. Adding two single-digit binary numbers is relatively simple, using a form of carrying:

0 + 0 → 0
0 + 1 → 1
1 + 0 → 1
1 + 1 → 0, carry 1 (since 1 + 1 = 10 in binary)

Adding two "1" digits produces a digit "0", while 1 will have to be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented:

5 + 5 → 0, carry 1 (since 5 + 5 = 0 + 1 × 10)
7 + 9 → 6, carry 1 (since 7 + 9 = 6 + 1 × 10)

This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary:

  1 1 1 1 1    (carried digits)
    0 1 1 0 1
+   1 0 1 1 1
-------------
= 1 0 0 1 0 0

In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). The top row shows the carry bits used. Starting in the rightmost column, 1 + 1 = 10₂. The 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 1 + 0 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂. This time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36 decimal).

When computers must add two numbers, the rule that x xor y = (x + y) mod 2 for any two bits x and y allows for very fast calculation as well.
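As an illustration (a sketch of mine, not part of the original text), this xor/carry rule yields a small Python addition routine:

    def add_binary(x: int, y: int) -> int:
        """Add two non-negative integers using only bitwise operations."""
        while y:
            carry = (x & y) << 1  # columns where both bits are 1 produce a carry
            x = x ^ y             # per-bit sum without carries: x xor y = (x + y) mod 2
            y = carry
        return x

    print(bin(add_binary(0b01101, 0b10111)))  # 0b100100, i.e. 13 + 23 = 36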

A simplification for many binary addition problems is the Long Carry Method or Brookhouse Method of Binary Addition. This method is generally useful in any binary addition in which one of the numbers contains a long string of "1" digits. For example, the following large binary numbers can be added in two simple steps, without multiple carries from one place to the next.

Traditional carry method:

    1 1 1   1 1 1 1 1        (carried digits)
      1 1 1 0 1 1 1 1 1 0
  +   1 0 1 0 1 1 0 0 1 1
  -----------------------
  = 1 1 0 0 1 1 1 0 0 0 1

Long carry method:

      1 1 1 0 1 1 1 1 1 0    (cross out the strings of 1s and the digits added to them)
  +   1 0 1 0 1 1 0 0 1 1    (add the crossed-out digits first)
  -----------------------
  + 1 0 0 0 1 0 0 0 0 0 0    (sum of the crossed-out digits; now add the remaining digits)
  ---------------------------
  = 1 1 0 0 1 1 1 0 0 0 1

In this example, two numerals are being added together: 1110111110₂ (958₁₀) and 1010110011₂ (691₁₀). The top row shows the carry bits used. Instead of the standard carry from one column to the next, the lowest place-valued "1" with a "1" in the corresponding place value beneath it may be added, and a "1" may be carried to one digit past the end of the series. These numbers must be crossed off, since they are already added. Then simply add that result to the uncanceled digits in the second row. Proceeding like this gives the final answer 11001110001₂ (1649₁₀).

Addition table

      0     1
0     0     1
1     1    10

Subtraction

Subtraction works in much the same way:

0 − 0 → 0
0 − 1 → 1, borrow 1
1 − 0 → 1
1 − 1 → 0

Subtracting a "1" digit from a "0" digit produces the digit "1", while 1 will have to be subtracted from the next column. This is known as borrowing. The principle is the same as for carrying. When the result of a subtraction is less than 0, the least possible value of a digit, the procedure is to "borrow" the deficit divided by the radix (that is, 10/10) from the left, subtracting it from the next positional value.

    *   * * *    (starred columns are borrowed from)
  1 1 0 1 1 1 0
−     1 0 1 1 1
----------------
= 1 0 1 0 1 1 1

Subtracting a positive number is equivalent to adding a negative number of equal absolute value; computers typically use two's complement notation to represent negative values. This notation eliminates the need for a separate "subtract" operation. Using two's complement notation, subtraction can be summarized by the following formula:

A − B = A + not B + 1
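The formula can be checked with a short Python sketch (mine, assuming an 8-bit word for illustration):

    MASK = 0xFF  # assume an 8-bit machine word

    def subtract(a: int, b: int) -> int:
        """A - B = A + (not B) + 1, computed modulo 2**8."""
        return (a + (~b & MASK) + 1) & MASK

    print(subtract(23, 13))       # 10
    print(bin(subtract(13, 23)))  # 0b11110110: the two's complement pattern for -10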

For further details, see two's complement.

Multiplication

Multiplication in binary is similar to its decimal counterpart. Two numbers A and B can be multiplied by partial products: for each digit in B, the product of that digit with A is calculated and written on a new line, shifted leftward so that its rightmost digit lines up with the digit in B that was used. The sum of all these partial products gives the final result.

Since there are only two digits in binary, there are only two possible outcomes of each partial multiplication:

* If the digit in B is 0, the partial product is also 0.
* If the digit in B is 1, the partial product is equal to A.

For example, the binary numbers 1011 and 1010 are multiplied as follows:

         1 0 1 1   (A)
       × 1 0 1 0   (B)
       ---------
         0 0 0 0   ← Corresponds to a zero in B
 +     1 0 1 1     ← Corresponds to a one in B
 +   0 0 0 0
 + 1 0 1 1
 ---------------
 = 1 1 0 1 1 1 0

Binary numbers can also be multiplied with bits after a binary point:

           1 0 1 . 1 0 1   (A)  (5.625 in decimal)
         × 1 1 0 . 0 1     (B)  (6.25 in decimal)
         -------------
           1 0 1 1 0 1   ← Corresponds to a one in B
 +       0 0 0 0 0 0     ← Corresponds to a zero in B
 +     0 0 0 0 0 0
 +   1 0 1 1 0 1
 + 1 0 1 1 0 1
 -----------------------
 = 1 0 0 0 1 1 . 0 0 1 0 1   (35.15625 in decimal)
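For integer operands, the partial-product scheme can be sketched in Python (illustrative, not from the original text):

    def multiply(a: int, b: int) -> int:
        """Shift-and-add multiplication: one partial product per digit of b."""
        result = 0
        shift = 0
        while b:
            if b & 1:                 # a one in B: the partial product equals A, shifted
                result += a << shift
            # a zero in B contributes a zero partial product
            b >>= 1
            shift += 1
        return result

    print(bin(multiply(0b1011, 0b1010)))  # 0b1101110 (11 × 10 = 110)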

See also Booth's multiplication algorithm.

Multiplication table

      0     1
0     0     0
1     0     1

Division

See also: Division (digital)

Binary division is again similar to its decimal counterpart:

Here, the divisor is 101₂, or 5 decimal, while the dividend is 11011₂, or 27 decimal. The procedure is the same as that of decimal long division; here, the divisor 101₂ goes into the first three digits 110₂ of the dividend one time, so a "1" is written on the top line. This result is multiplied by the divisor, and subtracted from the first three digits of the dividend; the next digit (a "1") is included to obtain a new three-digit sequence:

          1
       ___________
 1 0 1 ) 1 1 0 1 1
       − 1 0 1
         -----
         0 1 1


The procedure is then repeated with the new sequence, continuing until the digits in the dividend have been exhausted:

          1 0 1
       ___________
 1 0 1 ) 1 1 0 1 1
       − 1 0 1
         -----
         0 1 1
       − 0 0 0
         -----
         1 1 1
       − 1 0 1
         -----
           1 0

Thus, 11011₂ divided by 101₂ gives the quotient 101₂ (5 decimal) with a remainder of 10₂ (2 decimal).
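The same procedure can be sketched in Python (an illustrative sketch, not part of the original answer):

    def divide(dividend: int, divisor: int):
        """Binary long division: bring down one dividend bit per step."""
        quotient, remainder = 0, 0
        for i in range(dividend.bit_length() - 1, -1, -1):
            remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down a bit
            quotient <<= 1
            if remainder >= divisor:   # the divisor "goes into" the current digits
                remainder -= divisor
                quotient |= 1
        return quotient, remainder

    print(divide(0b11011, 0b101))  # (5, 2): 27 divided by 5 is 5 remainder 2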

Q-2. Explain the Boolean rules with relevant examples.

Ans- Boolean algebra finds its most practical use in the simplification of logic circuits. If we translate a logic circuit's function into symbolic (Boolean) form, and apply certain algebraic rules to the resulting equation to reduce the number of terms and/or arithmetic operations, the simplified equation may be translated back into circuit form for a logic circuit performing the same function with fewer components. If an equivalent function can be achieved with fewer components, the result will be increased reliability and decreased cost of manufacture.

To this end, there are several rules of Boolean algebra presented in this section for use in reducing expressions to their simplest forms. The identities and properties already reviewed in this chapter are very useful in Boolean simplification, and for the most part bear similarity to many identities and properties of "normal" algebra. However, the rules shown in this section are all unique to Boolean mathematics.

Example: the rule A + AB = A.

This rule may be proven symbolically by factoring an "A" out of the two terms, then applying the rules A + 1 = 1 and 1A = A to achieve the final result:

A + AB
= A(1 + B)
= A(1)
= A

Please note how the rule A + 1 = 1 was used to reduce the (B + 1) term to 1. When a rule like "A + 1 = 1" is expressed using the letter "A", it doesn't mean it only applies to expressions containing "A". What the "A" stands for in a rule like A + 1 = 1 is any Boolean variable or collection of variables. This is perhaps the most difficult concept for new students to master in Boolean simplification: applying standardized identities, properties, and rules to expressions not in standard form.

For instance, the Boolean expression ABC + 1 also reduces to 1 by means of the "A + 1 = 1" identity. In this case, we recognize that the "A" term in the identity's standard form can represent the entire "ABC" term in the original expression.

The next rule, A + A'B = A + B, looks similar to the first one shown in this section, but is actually quite different and requires a more clever proof:


A + A'B
= A + AB + A'B      (using A = A + AB)
= A + B(A + A')
= A + B(1)
= A + B

Note how the last rule (A + AB = A) is used to "un-simplify" the first "A" term in the expression, changing the "A" into an "A + AB". While this may seem like a backward step, it certainly helped to reduce the expression to something simpler! Sometimes in mathematics we must take "backward" steps to achieve the most elegant solution. Knowing when to take such a step and when not to is part of the art-form of algebra, just as a victory in a game of chess almost always requires calculated sacrifices.

Another rule involves the simplification of a product-of-sums expression, (A + B)(A + C) = A + BC:


And, the corresponding proof:

(A + B)(A + C)
= AA + AC + AB + BC    (distributing)
= A + AC + AB + BC     (AA = A)
= A(1 + C + B) + BC
= A(1) + BC
= A + BC

To summarize, here are the three new rules of Boolean simplification expounded in this section:

A + AB = A
A + A'B = A + B
(A + B)(A + C) = A + BC

Definition of Boolean Expressions


A Boolean expression is defined as an expression in which the constituents take one of two values, and the algebraic operations defined on the set are logical OR, a type of addition, and logical AND, a type of multiplication.

A Boolean expression differs from an ordinary expression in three ways: in the values that the variables may assume, which are of a logical rather than a numeric quality, namely 0 and 1; in the operations applied to these values; and in the properties of those operations, that is, the laws that they obey. Boolean expressions appear in mathematical logic, digital logic, computer programming, set theory, and statistics.

Laws of Boolean algebra:

(1a) x · y = y · x
(1b) x + y = y + x
(1c) 1 + x = 1
(2a) x · (y · z) = (x · y) · z
(2b) x + (y + z) = (x + y) + z
(3a) x · (y + z) = (x · y) + (x · z)
(3b) x + (y · z) = (x + y) · (x + z)
(4a) x · x = x
(4b) x + x = x
(5a) x · (x + y) = x
(5b) x + (x · y) = x
(6a) x · x' = 0
(6b) x + x' = 1
(7) (x')' = x
(8a) (x · y)' = x' + y'
(8b) (x + y)' = x' · y'

These rules translate directly into logic gates, and some of them can also be derived from the simpler identities given earlier.
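Since each variable takes only the values 0 and 1, the laws can be verified exhaustively. The following Python sketch (mine, for illustration) checks a few of them:

    from itertools import product

    OR  = lambda a, b: a | b   # logical OR on 0/1 values
    AND = lambda a, b: a & b   # logical AND on 0/1 values
    NOT = lambda a: 1 - a      # complement, written x' above

    for x, y in product((0, 1), repeat=2):
        assert AND(x, OR(x, y)) == x                  # (5a) x · (x + y) = x
        assert OR(x, AND(x, y)) == x                  # (5b) x + (x · y) = x
        assert AND(x, NOT(x)) == 0                    # (6a) x · x' = 0
        assert NOT(AND(x, y)) == OR(NOT(x), NOT(y))   # (8a) De Morgan
    print("all checked identities hold")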

Examples of Boolean expressions:

Example 1: Determine the OR operation using Boolean expressions.

Solution: Construct a truth table for the OR operation:

x     y     x ∨ y
F     F     F
F     T     T
T     F     T
T     T     T

Example 2: Determine the AND operation using Boolean expressions.

Solution: Construct a truth table for the AND operation:

x     y     x ∧ y
F     F     F
F     T     F
T     F     F
T     T     T

Example 3: Determine the NOT operation using Boolean expressions.

Solution: Construct a truth table for the NOT operation:

x     ¬x
F     T
T     F


Q-3. Explain Karnaugh maps with your own examples illustrating their formation.

Ans-

Karnaugh map

(Figure: K-map showing minterms and boxes covering the desired minterms. The brown region is an overlap of the red (square) and green regions.)

The input variables can be combined in 16 different ways, so the Karnaugh map has 16 positions, and therefore is arranged in a 4 × 4 grid.

The binary digits in the map represent the function's output for any given combination of inputs. So 0 is written in the upper leftmost corner of the map because ƒ = 0 when A = 0, B = 0, C = 0, D = 0. Similarly we mark the bottom right corner as 1 because A = 1, B = 0, C = 1, D = 0 gives ƒ = 1. Note that the values are ordered in a Gray code, so that precisely one variable changes between any pair of adjacent cells.

After the Karnaugh map has been constructed the next task is to find the minimal terms to use in the final expression. These terms are found by encircling groups of 1s in the map. The groups must be rectangular and must have an area that is a power of two (i.e. 1, 2, 4, 8…). The rectangles should be as large as possible without containing any 0s. The optimal groupings in this map are marked by the green, red and blue lines. Note that groups may overlap. In this example, the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.

The grid is toroidally connected, which means that the rectangular groups can wrap around the edges, so AD' is a valid term, although not part of the minimal set; it covers minterms 8, 10, 12, and 14.

Perhaps the hardest-to-visualize wrap-around term is B'D', which covers the four corners; it covers minterms 0, 2, 8, 10.

Examples:


Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. The following is an unsimplified Boolean algebra function with Boolean variables A, B, C, D, and their inverses. It can be represented in two different ways, as a minterm list and as a truth table:

f(A,B,C,D) = ∑(6, 8, 9, 10, 11, 12, 13, 14)

Note: the values inside ∑ are the minterms to map (i.e., the rows that have output 1 in the truth table).

Truth table

Using the defined minterms, the truth table can be created:

 #   A B C D   f(A,B,C,D)
 0   0 0 0 0   0
 1   0 0 0 1   0
 2   0 0 1 0   0
 3   0 0 1 1   0
 4   0 1 0 0   0
 5   0 1 0 1   0
 6   0 1 1 0   1
 7   0 1 1 1   0
 8   1 0 0 0   1
 9   1 0 0 1   1
10   1 0 1 0   1
11   1 0 1 1   1
12   1 1 0 0   1
13   1 1 0 1   1
14   1 1 1 0   1
15   1 1 1 1   0
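The truth table can be regenerated from the minterm list with a short Python sketch (illustrative only):

    minterms = {6, 8, 9, 10, 11, 12, 13, 14}

    print(" #  A B C D  f")
    for m in range(16):
        a, b, c, d = (m >> 3) & 1, (m >> 2) & 1, (m >> 1) & 1, m & 1
        print(f"{m:2}  {a} {b} {c} {d}  {int(m in minterms)}")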

Parity generator

This picture shows the parity-generator function. The output is 1 (or 0) when the input contains an even (or odd) number of 1s (or 0s). This function is the worst case for a 4-input Karnaugh map: there is no way to simplify it, because no 1 (or 0) in the map has another 1 (or 0) as a neighbor.


Karnaugh map for 5 (five) inputs: many people do not know how to create a map for five inputs. The next picture shows how it can be done, and shows the different grouping rules on a 5-input Karnaugh map:

The adder: the basic circuit is an adder. There is an example of the basic components of an adder (half adders can also be used).

Q-4. With a neat labeled diagram explain the working of binary half and full adders.

Ans.

A half adder is a logical circuit that performs an addition operation on two binary digits. The half adder produces a sum and a carry value which are both binary digits.


A full adder is a logical circuit that performs an addition operation on three binary digits. The full adder produces a sum and carry value, which are both binary digits. It can be combined with other full adders or work on its own.
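In place of the missing diagrams, here is a gate-level sketch in Python (the function names are mine):

    def half_adder(a: int, b: int):
        """Sum = a XOR b, carry = a AND b."""
        return a ^ b, a & b

    def full_adder(a: int, b: int, carry_in: int):
        """Built from two half adders plus an OR gate on the carries."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2

    print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 11 in binary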

Q-5. Describe the concept of timing diagrams and synchronous logic.

Ans-

Timing diagrams:

Timing diagrams are key to understanding digital systems. They describe how digital circuitry behaves over time, and they help explain how a digital circuit or subcircuit should work and how it fits into a larger system. Learning to read timing diagrams therefore makes it easier to work with digital systems and to integrate them.

Below is a list of the most commonly used timing-diagram fragments:

Low level to supply voltage:

Transition to low or high level:


Bus signals – parallel signals transitioning from one level to another:

High Impedance state:

Bus signal with floating impedance:

Conditional change of one signal depending on another signal's transition:

Transition on a signal causes state changes in a BUS:

More than one transition causes changes in a BUS:


Sequential transition – one signal transition causes another signal transition, and the second signal transition causes a third signal transition.

As you can see, timing diagrams together with the digital circuit can completely describe the circuit's operation. To understand a timing diagram, follow all of its symbols and transitions.

Timing diagrams use many different symbols, depending on the designer or circuit manufacturer, but once you understand the overall picture you can easily read any timing diagram, as in this example:


Synchronous sequential logic

Nearly all sequential logic today is 'clocked' or 'synchronous' logic: there is a 'clock' signal, and all internal memory (the 'internal state') changes only on a clock edge. The basic storage element in sequential logic is the flip-flop.

The main advantage of synchronous logic is its simplicity. Every operation in the circuit must be completed inside a fixed interval of time between two clock pulses, called a 'clock cycle'. As long as this condition is met (ignoring certain other details), the circuit is guaranteed to be reliable. Synchronous logic also has two main disadvantages, as follows.

1. The clock signal must be distributed to every flip-flop in the circuit. As the clock is usually a high-frequency signal, this distribution consumes a relatively large amount of power and dissipates much heat. Even the flip-flops that are doing nothing consume a small amount of power, thereby generating waste heat in the chip.

2. The maximum possible clock rate is determined by the slowest logic path in the circuit, otherwise known as the critical path. This means that every logical calculation, from the simplest to the most complex, must complete in one clock cycle. One way around this limitation is to split complex operations into several simple operations, a technique known as 'pipelining'. This technique is prominent within microprocessor design, and helps to improve the performance of modern processors.

Q-6. Discuss the Quine–McCluskey method.

Ans-


Quine–McCluskey algorithm

The Quine–McCluskey algorithm (or the method of prime implicants) is a method used for the minimization of Boolean functions which was developed by W. V. Quine and Edward J. McCluskey. It is functionally identical to Karnaugh mapping, but the tabular form makes it more efficient for use in computer algorithms, and it also gives a deterministic way to check that the minimal form of a Boolean function has been reached. It is sometimes referred to as the tabulation method.

The method involves two steps:

1. Finding all prime implicants of the function.

2. Using those prime implicants in a prime implicant chart to find the essential prime implicants of the function, as well as the other prime implicants that are necessary to cover the function.

Complexity: Although more practical than Karnaugh mapping when dealing with more than four variables, the Quine–McCluskey algorithm has a limited range of use, since the problem it solves is NP-hard: the runtime of the algorithm grows exponentially with the number of variables. It can be shown that for a function of n variables the upper bound on the number of prime implicants is 3^n/n. If n = 32, there may be over 6.5 × 10^15 prime implicants. Functions with a large number of variables have to be minimized with potentially non-optimal heuristic methods, of which the Espresso heuristic logic minimizer is the de facto standard.

Step 1: finding prime implicants

Minimizing an arbitrary function:

       A B C D   f
m0     0 0 0 0   0
m1     0 0 0 1   0
m2     0 0 1 0   0
m3     0 0 1 1   0
m4     0 1 0 0   1
m5     0 1 0 1   0
m6     0 1 1 0   0
m7     0 1 1 1   0
m8     1 0 0 0   1
m9     1 0 0 1   x
m10    1 0 1 0   1
m11    1 0 1 1   1
m12    1 1 0 0   1
m13    1 1 0 1   0
m14    1 1 1 0   x
m15    1 1 1 1   1


One can easily form the canonical sum of products expression from this table, simply by summing the minterms (leaving out don't-care terms) where the function evaluates to one:

f(A,B,C,D) = A'BC'D' + AB'C'D' + AB'CD' + AB'CD + ABC'D' + ABCD

Of course, that's certainly not minimal. So to optimize, all minterms that evaluate to one are first placed in a minterm table. Don't-care terms are also added into this table, so they can be combined with minterms:

Number of 1s   Minterm   Binary Representation
1              m4        0100
               m8        1000
2              m9        1001
               m10       1010
               m12       1100
3              m11       1011
               m14       1110
4              m15       1111

At this point, one can start combining minterms with other minterms. If two terms vary by only a single digit changing, that digit can be replaced with a dash indicating that the digit doesn't matter. Terms that can't be combined any more are marked with a "*". When going from Size 2 to Size 4, treat '-' as a third bit value. Ex: -110 and -100 or -11- can be combined, but not -110 and 011-. (Trick: Match up the '-' first.)

Number of 1s   Minterm   0-Cube   Size 2 Implicants    Size 4 Implicants
1              m4        0100     m(4,12)  -100*       m(8,9,10,11)   10--*
               m8        1000     m(8,9)   100-        m(8,10,12,14)  1--0*
                                  m(8,10)  10-0
                                  m(8,12)  1-00
2              m9        1001     m(9,11)  10-1        m(10,11,14,15) 1-1-*
               m10       1010     m(10,11) 101-
               m12       1100     m(10,14) 1-10
                                  m(12,14) 11-0
3              m11       1011     m(11,15) 1-11
               m14       1110     m(14,15) 111-
4              m15       1111

Note: in this example, none of the terms in the size-4 implicant table can be combined any further. In general, this process should be continued (size 8, etc.) until no more terms can be combined.
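The combining step can be sketched in Python (an illustrative sketch; representing implicants as strings with '-' for a dash is my assumption):

    def combine(a: str, b: str):
        """Merge two implicants if they differ in exactly one position;
        dashes must line up ('match up the - first')."""
        diffs = [i for i, (p, q) in enumerate(zip(a, b)) if p != q]
        if len(diffs) == 1 and '-' not in (a[diffs[0]], b[diffs[0]]):
            i = diffs[0]
            return a[:i] + '-' + a[i + 1:]
        return None

    print(combine('1000', '1010'))  # '10-0'
    print(combine('-110', '011-'))  # None: the dashes do not match up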

Step 2: prime implicant chart

None of the terms can be combined any further than this, so at this point we construct an essential prime implicant table. Along the side go the prime implicants that have just been generated, and along the top go the minterms specified earlier. The don't-care terms are not placed on top; they are omitted because they are not necessary inputs.

                 4    8    10   11   12   15
m(4,12)*         X                   X              -100  =>  - 1 0 0
m(8,9,10,11)          X    X    X                   10--  =>  1 0 - -
m(8,10,12,14)         X    X         X              1--0  =>  1 - - 0
m(10,11,14,15)*            X    X         X         1-1-  =>  1 - 1 -


Here, each of the essential prime implicants has been starred. The second prime implicant can be 'covered' by the third and fourth, and the third prime implicant can be 'covered' by the second and first; neither is thus essential. If a prime implicant is essential then, as would be expected, it is necessary to include it in the minimized Boolean equation. In some cases, the essential prime implicants do not cover all minterms, in which case additional procedures for chart reduction can be employed. The simplest "additional procedure" is trial and error, but a more systematic way is Petrick's method. In the current example, the essential prime implicants do not handle all of the minterms, so one can combine the essential implicants with one of the two non-essential ones to yield one of these two equations:

f(A,B,C,D) = BC'D' + AB' + AC

f(A,B,C,D) = BC'D' + AD' + AC

Both of those final equations are functionally equivalent to the original, verbose expression.
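The chart logic can also be sketched in Python (illustrative; the implicant names and the minterms they cover are taken from the chart above, with don't-cares excluded):

    implicants = {
        "-100": {4, 12},
        "10--": {8, 10, 11},
        "1--0": {8, 10, 12},
        "1-1-": {10, 11, 15},
    }
    required = {4, 8, 10, 11, 12, 15}

    # An implicant is essential if some required minterm is covered by it alone.
    essential = {name for name, cover in implicants.items()
                 if any(all(m not in other
                            for o, other in implicants.items() if o != name)
                        for m in cover)}
    print(sorted(essential))  # ['-100', '1-1-']

    covered = set().union(*(implicants[n] for n in essential))
    print(sorted(required - covered))  # [8]: add '10--' or '1--0' to finish the cover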

Q-7. With appropriate diagrams explain the bus structure of a computer system.

Ans-

Bus structure:

A group of lines that serves as a connecting path for several devices is called a bus. In addition to the lines that carry the data, the bus must have lines for address and control purposes.

In computer architecture, a bus is a subsystem that transfers data between computer components inside a computer, or between computers.

A computer bus structure can be provided which permits replacement of removable modules during operation of a computer, wherein means are provided to precharge signal output lines to within a predetermined range prior to their use to carry signals, and further, wherein means are provided to minimize arcing to pins designed to carry the power and signals of a connector. In a specific embodiment, pin lengths, i.e., the separation between male and female components of the connector, are subdivided into long pin lengths and short pin lengths. Ground connections and power connections for each voltage level are assigned to the long pin lengths. Signal connections and a second power connection for each voltage level are assigned to the short pin lengths. The precharge/prebias circuit comprises a resistor divider coupled between a power source and ground, with a high-impedance tap coupled to a designated signal pin, across which is coupled a charging capacitor (or its equivalent, representing the capacitance of the signal line). Bias is applied to the precharge/prebias circuit for a sufficient length of time to precharge the signal line to a desired neutral signal level, between the expected high and low signal values, prior to connection of the short pin to its mate.

Early computer buses were literally parallel electrical buses with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrical parallel) or daisy chain topology, or connected by switched hubs, as in the case of USB.


Description of a bus:

At one time, "bus" meant an electrically parallel system, with electrical conductors similar or identical to the pins on the CPU. This is no longer the case, and modern systems are blurring the lines between buses and networks.

Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the minimum of one used in the 1-Wire and UNI/O serial buses. As data rates increase, the problems of timing skew, power consumption, electromagnetic interference and crosstalk across parallel buses become more and more difficult to circumvent. One partial solution to this problem has been to double pump the bus. Often, a serial bus can actually be operated at higher overall data rates than a parallel bus, despite having fewer electrical connections, because a serial bus inherently has no timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this. Multidrop connections do not work well for fast serial buses, so most modern serial buses use daisy-chain or hub designs.

Most computers have both internal and external buses. An internal bus connects all the internal components of a computer to the motherboard (and thus, the CPU and internal memory). These types of buses are also referred to as a local bus, because they are intended to connect to local devices, not to those in other machines or external to the computer. An external bus connects external peripherals to the motherboard.

Network connections such as Ethernet are not generally regarded as buses, although the difference is largely conceptual rather than practical. The arrival of technologies such as InfiniBand and HyperTransport is further blurring the boundaries between networks and buses. Even the lines between internal and external are sometimes fuzzy: I²C can be used as both an internal bus and an external bus (where it is known as ACCESS.bus), and InfiniBand is intended to replace both internal buses like PCI as well as external ones like Fibre Channel. In the typical desktop application, USB serves as a peripheral bus, but it also sees some use as a networking utility and for connectivity between different computers, again blurring the conceptual distinction.

Q-8. Explain the CPU organization with relevant diagrams.

Ans-

The CPU, which is the heart of a computer, consists of registers, a control unit and an arithmetic logic unit. The interconnection of these units is achieved through the system bus, as shown in figure 4.1.

The following tasks are to be performed by the CPU:

1. Fetch instructions: The CPU must read instructions from the memory.


2. Interpret instructions: The instructions must be decoded to determine what action is required.

3. Fetch data: The execution of an instruction may require reading data from memory or an I/O module.

4. Process data: The execution of an instruction may require performing some arithmetic or logical operations on data.

5. Write data: The results of an execution may require writing data to the memory or an I/O module.
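These five tasks can be modeled as a small fetch-decode-execute loop. The following Python sketch is illustrative only; the one-accumulator instruction set and memory layout are invented for the example:

    # Invented program: acc = mem[10] + mem[11]; mem[12] = acc
    memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
              10: 5, 11: 7, 12: 0}

    pc, acc = 0, 0
    while True:
        opcode, operand = memory[pc]   # 1. fetch the instruction; 2. interpret it
        pc += 1
        if opcode == "LOAD":
            acc = memory[operand]      # 3. fetch data from memory
        elif opcode == "ADD":
            acc += memory[operand]     # 4. process data
        elif opcode == "STORE":
            memory[operand] = acc      # 5. write data back
        elif opcode == "HALT":
            break
    print(memory[12])  # 12, i.e. 5 + 7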

Functions of the computer organization:

1. (a) A CPU can be defined as a general-purpose instruction-set processor responsible for program execution.
(b) A CPU communicates over an address bus, a data bus and a control bus.
(c) A computer with one CPU is called a uniprocessor, and a computer with more than one CPU is called a multiprocessor.
(d) The address bus is used to transfer addresses from the CPU to main memory or to I/O devices.
(e) The data bus is the main path by which information is transferred to and from the CPU.
(f) The control bus is used by the CPU to control the various devices connected to it and to synchronize their operations with those of the CPU.

2. (a) The control unit takes the instructions one by one for execution. It takes data from input devices and stores it in memory, and it also sends data from memory to the output device.
(b) All arithmetic and logical operations are carried out by the arithmetic logic unit.
(c) The control unit and the arithmetic logic unit together are known as the CPU.

3. (a) The accumulator is the main register of the ALU.
(b) In the execution of most instructions, the accumulator is used to hold an input operand or the output result.
(c) The instruction register holds the opcode of the current instruction.
(d) The memory address register holds the address of the current instruction.

4. An accumulator-based CPU consists of (a) a data processing unit, (b) a program control unit and (c) a memory and I/O interface unit.
(a) (i) In the data processing unit, data is processed to produce results. (ii) The accumulator is the main operand register of the ALU.
(b) (i) The program control unit controls the various parts of the CPU. (ii) The program counter holds the address of the next instruction to be read from memory after the current instruction is executed. (iii) The instruction register holds the opcode of the current instruction. (iv) The control circuits are responsible for every operation of the CPU.
(c) (i) The data registers of the memory and I/O interface unit act as a buffer between the CPU and main memory. (ii) The address register contains the address of the present instruction, obtained from the program control unit.

5. (a) The stack pointer and the flag register are special registers of the CPU.
(b) The stack pointer holds the address of the most recently entered item on the stack.
(c) The flag register indicates status conditions that depend on the results of operations.

6. (a) Micro-operations are the operations executed on data stored in registers.
(b) The set of micro-operations specified by an instruction is known as a macro-operation.

7. (a) The sequence of operations involved in processing an instruction constitutes an instruction cycle.
(b) The fetch cycle is defined as the time required to get the instruction code from main memory into the CPU.
(c) The execute cycle is the time required to decode and execute an instruction.
(d) The fetch cycle requires a fixed time slot, while the execute cycle requires a variable time slot.

8. (a) The word length indicates the number of bits the CPU can process at a time.
(b) The memory size indicates the total storage capacity of the machine.
(c) The word length also indicates the bit length of each register.


9. A computer is said to operate on the stored-program concept if it keeps the instructions as well as the data of a program in main memory while they are waiting to execute.

10. A stored program in main memory is executed instruction after instruction, in sequence, by means of the program counter.

Q-9. Explain the organization of the 8085 processor with the relevant diagrams.

Ans-

The Intel 8085 is an 8-bit microprocessor introduced by Intel in 1977. It was binary-compatible with the more-famous Intel 8080 but required less supporting hardware, thus allowing simpler and less expensive microcomputer systems to be built.

The "5" in the model number came from the fact that the 8085 required only a +5-volt (V) power supply rather than the +5V, -5V and +12V supplies the 8080 needed. Both processors were sometimes used in computers running the CP/M operating system, and the 8085 later saw use as a microcontroller, by virtue of its low component count. Both designs were eclipsed for desktop computers by the compatible Zilog Z80, which took over most of the CP/M computer market as well as taking a share of the booming home computer market in the early-to-mid-1980s.

The 8085 had a long life as a controller. Once designed into such products as the DECtape controller and the VT100 video terminal in the late 1970s, it continued to serve for new production throughout the life span of those products (generally longer than the product life of desktop computers).

The 8085 architecture follows the von Neumann architecture, with a 16-bit address bus and an 8-bit data bus. The 8085 used a multiplexed data bus: the 16-bit address was split between the dedicated 8-bit address bus (the upper half) and the 8-bit data bus (the lower half), to save pins.

Registers:

The 8085 can access 2^16 (= 65,536) individual 8-bit memory locations; in other words, its address space is 64 KB. Unlike some other microprocessors of its era, it has a separate address space for up to 2^8 (= 256) I/O ports. It also has a built-in register array, usually labeled A (accumulator), B, C, D, E, H, and L. Further special-purpose registers are the 16-bit program counter (PC), the stack pointer (SP), and the 8-bit flag register F. The microprocessor has three maskable interrupts (RST 7.5, RST 6.5 and RST 5.5), one non-maskable interrupt (TRAP), and one externally serviced interrupt (INTR). The RST n.5 interrupts refer to actual pins on the processor, a feature which permitted simple systems to avoid the cost of a separate interrupt controller chip.

Buses:

* Address bus – a 16-line bus accessing 2^16 memory locations (64 KB) of memory.
* Data bus – an 8-line bus accessing one (8-bit) byte of data in one operation. Data bus width is the traditional measure of processor bit designations (as opposed to address bus width), resulting in the 8-bit microprocessor designation.
* Control buses – carry the essential signals for various operations.


Q-10. Describe the theory of addressing modes.

Ans-

Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how machine language instructions in that architecture identify the operand (or operands) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand by using information held in registers and/or constants contained within a machine instruction or elsewhere.

In computer programming, addressing modes are primarily of interest to compiler writers and to those who write code directly in assembly language.

How many addressing modes:

Different computer architectures vary greatly as to the number of addressing modes they provide in hardware. There are some benefits to eliminating complex addressing modes and using only one or a few simpler addressing modes, even though it requires a few extra instructions, and perhaps an extra register.[1] It has proven much easier to design pipelined CPUs if the only addressing modes available are simple ones.

Most RISC machines have only about five simple addressing modes, while CISC machines such as the DEC VAX supermini have over a dozen addressing modes, some of which are quite complicated. The IBM System/360 mainframe had only three addressing modes; a few more have been added for the System/390.

When there are only a few addressing modes, the particular addressing mode required is usually encoded within the instruction code (e.g. IBM System/390, most RISC). But when there are many addressing modes, a specific field is often set aside in the instruction to specify the addressing mode. The DEC VAX allowed multiple memory operands for almost all instructions, and so reserved the first few bits of each operand specifier to indicate the addressing mode for that particular operand. Keeping the addressing mode specifier bits separate from the opcode operation bits produces an orthogonal instruction set.


Even on a computer with many addressing modes, measurements of actual programs indicate that the simple addressing modes listed below account for some 90% or more of all addressing modes used. Since most such measurements are based on code generated from high-level languages by compilers, this reflects to some extent the limitations of the compilers being used.[2]

Simple addressing modes for code

Absolute

+----+------------------------------+
|jump|           address            |
+----+------------------------------+

(Effective PC address = address)

The effective address for an absolute instruction address is the address parameter itself with no modifications.

PC-relative

+----+------------------------------+
|jump|            offset            |    jump relative
+----+------------------------------+

(Effective PC address = next instruction address + offset; the offset may be negative)

The effective address for a PC-relative instruction address is the offset parameter added to the address of the next instruction. This offset is usually signed to allow reference to code both before and after the instruction.

This is particularly useful in connection with jumps, because typical jumps are to nearby instructions (in a high-level language most if or while statements are reasonably short). Measurements of actual programs suggest that an 8 or 10 bit offset is large enough for some 90% of conditional jumps.

Another advantage of program-relative addressing is that the code may be position-independent, i.e. it can be loaded anywhere in memory without the need to adjust any addresses.

Some versions of this addressing mode may be conditional referring to two registers ("jump if reg1==reg2"), one register ("jump unless reg1==0") or no registers, implicitly referring to some previously-set bit in the status register. See also conditional execution below.
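As a small illustration (my sketch; the 2-byte instruction length and the addresses are invented), the PC-relative calculation looks like:

    def pc_relative_target(pc: int, instr_len: int, offset: int) -> int:
        """Effective PC address = next instruction address + signed offset."""
        return pc + instr_len + offset

    # A branch encoded at 0x1000 in 2 bytes, jumping 6 bytes backward:
    print(hex(pc_relative_target(0x1000, 2, -6)))  # 0xffc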

Register indirect

+-------+-----+
|jumpVia| reg |
+-------+-----+

(Effective PC address = contents of register 'reg')


The effective address for a Register indirect instruction is the address in the specified register. For example, (A7) to access the content of address register A7.

The effect is to transfer control to the instruction whose address is in the specified register.

Many RISC machines have a subroutine call instruction that places the return address in an address register—the register indirect addressing mode is used to return from that subroutine call.

Sequential addressing modes

Sequential execution

+------+
| nop  |    execute the following instruction
+------+

(Effective PC address = next instruction address)

The CPU, after executing a sequential instruction, immediately executes the following instruction.

Sequential execution is not considered to be an addressing mode on some computers.

Most instructions on most CPU architectures are sequential instructions. Because most instructions are sequential instructions, CPU designers often add features that deliberately sacrifice performance on the other instructions—branch instructions—in order to make these sequential instructions run faster.

Conditional branches load the PC with one of 2 possible results, depending on the condition—most CPU architectures use some other addressing mode for the "taken" branch, and sequential execution for the "not taken" branch.

Many features in modern CPUs (instruction prefetch, more complex pipelining, out-of-order execution, etc.) maintain the illusion that each instruction finishes before the next one begins, giving the same final results, even though that's not exactly what happens internally.

Each "basic block" of such sequential instructions exhibits both temporal and spatial locality of reference.

CPUs that do not use sequential execution

CPUs that do not use sequential execution with a program counter are extremely rare. In some CPUs, each instruction always specifies the address of the next instruction. Such CPUs have an instruction pointer that holds that specified address, but they do not have a complete program counter. Such CPUs include some drum-memory computers, the SECD machine, and the RTX 32P.[3]

Other computing architectures go much further, attempting to bypass the von Neumann bottleneck using a variety of alternatives to the program counter.


Conditional execution

Some computer architectures (e.g. ARM) have conditional instructions which can in some cases obviate the need for conditional branches and avoid flushing the instruction pipeline. An instruction such as a 'compare' is used to set a condition code, and subsequent instructions include a test on that condition code to see whether they are obeyed or ignored.

Skip

+------+-----+-----+
|skipEQ| reg1| reg2|    skip the following instruction if reg1 = reg2
+------+-----+-----+

(Effective PC address = next instruction address + 1)

Skip addressing may be considered a special kind of PC-relative addressing mode with a fixed "+1" offset. Like PC-relative addressing, some CPUs have versions of this addressing mode that only refer to one register ("skip if reg1==0") or no registers, implicitly referring to some previously-set bit in the status register. Other CPUs have a version that selects a specific bit in a specific byte to test ("skip if bit 7 of reg12 is 0").

Unlike all other conditional branches, a "skip" instruction never needs to flush the instruction pipeline.

Q-11. Describe the following elements of Bus Design:

A) Bus Types B) Arbitration Methods C) Bus Timing

Ans-

A) Bus types

Bus lines can be separated into two general types: dedicated and multiplexed. A dedicated bus line is permanently assigned either to one function or to a physical subset of computer components.

An example of functional dedication is the use of separate dedicated address and data lines, which is common in many buses.

However, this is not essential. For example, address and data information may be transmitted over the same set of lines using an Address Valid control line. At the beginning of a data transfer, the address is placed on the bus and the Address Valid line is activated. At this point, each module has a specified period of time to copy the address and determine whether it is the addressed module. The address is then removed from the bus, and the same bus connections are used for the subsequent read or write data transfer. This method of using the same lines for multiple purposes is known as time multiplexing.

The advantage of time multiplexing is that it requires fewer lines, which saves space and cost. The disadvantage is the need for more complex circuitry within each module. There is also a potential reduction in performance, because events that must share the lines cannot take place in parallel.

Physical dedication refers to the use of multiple buses, each of which connects only a subset of the modules. A common example is the use of an I/O bus to interconnect all I/O modules; this bus is then connected to the main bus through some type of I/O adapter module. The main advantage of physical dedication is high throughput, because there is less bus contention. The disadvantage is the increased size and cost of the system.

B) Arbitration methods

In all but the simplest systems, more than one module may need control of the bus. For example, an I/O module may need to read or write directly to memory, without sending the data through the CPU. Because only one unit at a time can successfully transmit over the bus, some method of arbitration is needed. The various methods can be broadly classified as centralized or distributed. In a centralized method, a single hardware device, referred to as a bus controller or arbiter, is responsible for allocating time on the bus. The device may be a separate module or part of the CPU.

In a distributed method, there is no central controller. Rather, each module contains access control logic, and the modules act together to share the bus. With both methods of arbitration, the purpose is to designate one device, either the CPU or an I/O module, as master. The master may then initiate a data transfer (e.g., read or write) with some other device, which acts as slave for this particular data exchange.

C) Bus timing

(Figure: SPI timing diagram.) The timing-diagram example describes the Serial Peripheral Interface (SPI) bus. Most SPI master nodes have the ability to set the clock polarity (CPOL) and clock phase (CPHA) with respect to the data. The timing diagram shows the clock for both values of CPOL, as well as the values of the two data lines (MISO and MOSI) for each value of CPHA. Note that when CPHA = 1, the data is delayed by one-half clock cycle.

SPI operates in the following way:

The master determines an appropriate CPOL & CPHA value

The master pulls down the slave select (SS) line for a specific slave chip

The master clocks SCK at a specific frequency

During each of the 8 clock cycles the transfer is full duplex:

o The master writes on the MOSI line and reads the MISO line

o The slave writes on the MISO line and reads the MOSI line

When finished the master can continue with another byte transfer or pull SS high to end the transfer

When a slave's SS line is high, both its MISO and MOSI lines should be high impedance, so as to avoid disrupting a transfer to a different slave. Prior to SS being pulled low, the MISO and MOSI lines are indicated with a "z" for high impedance. Also, prior to SS being pulled low, the "cycle #" row is meaningless and is shown greyed out.

Note that for CPHA=1 the MISO & MOSI lines are undefined until after the first clock edge and are also shown greyed-out before that.
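To make the sequence concrete, here is a bit-banged sketch of one mode-0 (CPOL = 0, CPHA = 0) byte transfer in Python; set_pin and get_pin are hypothetical GPIO helpers, not a real library API:

    def set_pin(name: str, value: int):   # hypothetical GPIO write
        pass

    def get_pin(name: str) -> int:        # hypothetical GPIO read
        return 0

    def spi_transfer_byte(out_byte: int) -> int:
        """Full-duplex transfer of one byte, MSB first, SPI mode 0."""
        in_byte = 0
        set_pin("SS", 0)                          # master pulls SS low to select the slave
        for i in range(7, -1, -1):                # 8 clock cycles
            set_pin("MOSI", (out_byte >> i) & 1)  # master writes MOSI
            set_pin("SCK", 1)                     # rising edge: both sides sample
            in_byte = (in_byte << 1) | get_pin("MISO")  # master reads MISO
            set_pin("SCK", 0)                     # falling edge: prepare the next bit
        set_pin("SS", 1)                          # deselect: slave's MISO goes high-impedance
        return in_byte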

A more typical timing diagram has just a single clock and numerous data lines.

Q-12. Explain the following types of operations:

Types of operations: data transfer, arithmetic.

Ans- Copy this link and write the answer: http://books.google.co.in/books?id=v2QDhNi9QswC&pg=SA3-PA6&lpg=SA3-PA6&dq=types+of+operations+in+computer+organization+*&source=bl&ots=4oSD80If_-&sig=kL-VS4SsSLoPlMwldfQL80BmZ58&hl=en&ei=fFy0TL_CAsuPcc2IjboI&sa=X&oi=book_result&ct=result&resnum=3&ved=0CCEQ6AEwAg#v=onepage&q&f=false