
BT0068 Computer Organization and Architecture 2


1. Give the details of data types specified for VAX & IBM 370 machines.

3.1 Integer

An XDR signed integer is a 32-bit datum that encodes an integer in the range [-2147483648, 2147483647]. The integer is represented in two's complement notation. The most and least significant bytes are 0 and 3, respectively. Integers are declared as follows:

   int identifier;

     (MSB)                   (LSB)
     |byte 0 |byte 1 |byte 2 |byte 3 |                 INTEGER
     <------------32 bits------------>

3.2 Unsigned Integer

An XDR unsigned integer is a 32-bit datum that encodes a nonnegative integer in the range [0, 4294967295]. It is represented by an unsigned binary number whose most and least significant bytes are 0 and 3, respectively. An unsigned integer is declared as follows:

   unsigned int identifier;

     (MSB)                   (LSB)
     |byte 0 |byte 1 |byte 2 |byte 3 |                 UNSIGNED INTEGER
     <------------32 bits------------>

3.3 Enumeration

Enumerations have the same representation as signed integers. Enumerations are handy for describing subsets of the integers. Enumerated data is declared as follows:

   enum { name-identifier = constant, ... } identifier;

For example, the three colors red, yellow, and blue could be described by an enumerated type:

   enum { RED = 2, YELLOW = 3, BLUE = 5 } colors;

It is an error to encode as an enum any integer other than those that have been given assignments in the enum declaration.

3.4 Boolean

Booleans are important enough and occur frequently enough to warrant their own explicit type in the standard. Booleans are declared as follows:

   bool identifier;

This is equivalent to:

   enum { FALSE = 0, TRUE = 1 } identifier;

3.5 Hyper Integer and Unsigned Hyper Integer

The standard also defines 64-bit (8-byte) numbers called hyper integer and unsigned hyper integer. Their representations are the obvious extensions of the integer and unsigned integer defined above. They are represented in two's complement notation. The most and least significant bytes are 0 and 7, respectively. Their declarations:

   hyper identifier;
   unsigned hyper identifier;

     (MSB)                                                           (LSB)
     |byte 0 |byte 1 |byte 2 |byte 3 |byte 4 |byte 5 |byte 6 |byte 7 |
     <----------------------------64 bits---------------------------->
                                      HYPER INTEGER / UNSIGNED HYPER INTEGER

3.6 Floating-point

The standard defines the floating-point data type "float" (32 bits or 4 bytes). The encoding used is the IEEE standard for normalized single-precision floating-point numbers [3]. The following three fields describe the single-precision floating-point number:

   S: The sign of the number. Values 0 and 1 represent positive and negative, respectively. One bit.

   E: The exponent of the number, base 2. 8 bits are devoted to this field. The exponent is biased by 127.

   F: The fractional part of the number's mantissa, base 2. 23 bits are devoted to this field.

Therefore, the floating-point number is described by:

   (-1)**S * 2**(E-Bias) * 1.F

It is declared as follows:

   float identifier;

     |byte 0 |byte 1 |byte 2 |byte 3 |              SINGLE-PRECISION
     S|   E   |           F          |              FLOATING-POINT NUMBER
     1|<- 8 ->|<-------23 bits------>|
     <------------32 bits------------>

Just as the most and least significant bytes of a number are 0 and 3, the most and least significant bits of a single-precision floating-point number are 0 and 31. The beginning bit (and most significant bit) offsets of S, E, and F are 0, 1, and 9, respectively. Note that these numbers refer to the mathematical positions of the bits, and NOT to their actual physical locations (which vary from medium to medium). The IEEE specifications should be consulted concerning the encoding for signed zero, signed infinity (overflow), and denormalized numbers (underflow) [3]. According to IEEE specifications, the "NaN" (not a number) is system dependent and should not be used externally.

3.7 Double-precision Floating-point

The standard defines the encoding for the double-precision floating-point data type "double" (64 bits or 8 bytes). The encoding used is the IEEE standard for normalized double-precision floating-point numbers [3]. The standard encodes the following three fields, which describe the double-precision floating-point number:

   S: The sign of the number. Values 0 and 1 represent positive and negative, respectively. One bit.

   E: The exponent of the number, base 2. 11 bits are devoted to this field. The exponent is biased by 1023.

   F: The fractional part of the number's mantissa, base 2. 52 bits are devoted to this field.

Therefore, the floating-point number is described by:

   (-1)**S * 2**(E-Bias) * 1.F

It is declared as follows:

   double identifier;

     |byte 0 |byte 1 |byte 2 |byte 3 |byte 4 |byte 5 |byte 6 |byte 7 |
     S|    E     |                      F                            |
     1|<--11---->|<----------------52 bits-------------------------->|
     <----------------------------64 bits---------------------------->
                                      DOUBLE-PRECISION FLOATING-POINT

Just as the most and least significant bytes of a number are 0 and 3, the most and least significant bits of a double-precision floating-point number are 0 and 63. The beginning bit (and most significant bit) offsets of S, E, and F are 0, 1, and 12, respectively. Note that these numbers refer to the mathematical positions of the bits, and NOT to their actual physical locations (which vary from medium to medium). The IEEE specifications should be consulted concerning the encoding for signed zero, signed infinity (overflow), and denormalized numbers (underflow) [3]. According to IEEE specifications, the "NaN" (not a number) is system dependent and should not be used externally.

3.8 Fixed-length Opaque Data

At times, fixed-length uninterpreted data needs to be passed among machines. This data is called "opaque" and is declared as follows:

   opaque identifier[n];

where the constant n is the (static) number of bytes necessary to contain the opaque data. If n is not a multiple of four, then the n bytes are followed by enough (0 to 3) residual zero bytes, r, to make the total byte count of the opaque object a multiple of four.

     | byte 0 | byte 1 |...|byte n-1|    0   |...|    0   |
     |<-----------n bytes---------->|<------r bytes------>|
     |<-----------n+r (where (n+r) mod 4 = 0)------------>|
                                      FIXED-LENGTH OPAQUE

3.9 Variable-length Opaque Data

The standard also provides for variable-length (counted) opaque data, defined as a sequence of n (numbered 0 through n-1) arbitrary bytes to be the number n encoded as an unsigned integer (as described below), followed by the n bytes of the sequence. Byte m of the sequence always precedes byte m+1 of the sequence, and byte 0 of the sequence always follows the sequence's length (count). If n is not a multiple of four, then the n bytes are followed by enough (0 to 3) residual zero bytes, r, to make the total byte count a multiple of four. Variable-length opaque data is declared in the following way:

   opaque identifier<m>;
or
   opaque identifier<>;

The constant m denotes an upper bound of the number of bytes that the sequence may contain. If m is not specified, as in the second declaration, it is assumed to be (2**32) - 1, the maximum length. The constant m would normally be found in a protocol specification. For example, a filing protocol may state that the maximum data transfer size is 8192 bytes, as follows:

   opaque filedata<8192>;

     |   length n   |byte0|byte1|...| n-1 |  0  |...|  0  |
     |<---4 bytes-->|<------n bytes------>|<---r bytes--->|
                    |<----n+r (where (n+r) mod 4 = 0)---->|
                                      VARIABLE-LENGTH OPAQUE

It is an error to encode a length greater than the maximum described in the specification.

3.10 String

The standard defines a string of n (numbered 0 through n-1) ASCII bytes to be the number n encoded as an unsigned integer (as described above), followed by the n bytes of the string. Byte m of the string always precedes byte m+1 of the string, and byte 0 of the string always follows the string's length. If n is not a multiple of four, then the n bytes are followed by enough (0 to 3) residual zero bytes, r, to make the total byte count a multiple of four. Counted byte strings are declared as follows:

   string object<m>;
or
   string object<>;

The constant m denotes an upper bound of the number of bytes that a string may contain. If m is not specified, as in the second declaration, it is assumed to be (2**32) - 1, the maximum length. The constant m would normally be found in a protocol specification. For example, a filing protocol may state that a file name can be no longer than 255 bytes, as follows:

   string filename<255>;

     |   length n   |byte0|byte1|...| n-1 |  0  |...|  0  |
     |<---4 bytes-->|<------n bytes------>|<---r bytes--->|
                    |<----n+r (where (n+r) mod 4 = 0)---->|
                                      STRING

It is an error to encode a length greater than the maximum described in the specification.

3.11 Fixed-length Array

Declarations for fixed-length arrays of homogeneous elements are in the following form:

   type-name identifier[n];

Fixed-length arrays of elements numbered 0 through n-1 are encoded by individually encoding the elements of the array in their natural order, 0 through n-1. Each element's size is a multiple of four bytes. Though all elements are of the same type, the elements may have different sizes. For example, in a fixed-length array of strings, all elements are of type "string", yet each element will vary in its length.

     | element 0 | element 1 |...| element n-1 |
     |<--------------n elements--------------->|
                                      FIXED-LENGTH ARRAY

3.12 Variable-length Array

Counted arrays provide the ability to encode variable-length arrays of homogeneous elements. The array is encoded as the element count n (an unsigned integer) followed by the encoding of each of the array's elements, starting with element 0 and progressing through element n-1. The declaration for variable-length arrays follows this form:

   type-name identifier<m>;
or
   type-name identifier<>;

The constant m specifies the maximum acceptable element count of an array; if m is not specified, as in the second declaration, it is assumed to be (2**32) - 1.

     |     n     | element 0 | element 1 |...|element n-1|
     |<-4 bytes->|<--------------n elements------------->|
                                      COUNTED ARRAY

It is an error to encode a value of n that is greater than the maximum described in the specification.
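The length-prefix and zero-padding rules of sections 3.9 and 3.10 can be illustrated with a short C sketch. This is only an illustration of the encoding described above, not the standard xdr library API; the function name and buffer handling are assumptions.

    #include <stdint.h>
    #include <string.h>

    /*
     * Illustrative sketch: encode a counted byte sequence as described
     * above -- a 4-byte length n with the most significant byte first,
     * the n data bytes, then 0 to 3 residual zero bytes so the total is
     * a multiple of four.  Returns the number of bytes written; the
     * caller must provide a buffer of at least 4 + n + 3 bytes.
     */
    size_t xdr_encode_opaque(uint8_t *out, const uint8_t *data, uint32_t n)
    {
        size_t pos = 0;
        uint32_t r = (4 - (n % 4)) % 4;      /* residual zero bytes */

        out[pos++] = (uint8_t)(n >> 24);     /* byte 0 is the MSB   */
        out[pos++] = (uint8_t)(n >> 16);
        out[pos++] = (uint8_t)(n >> 8);
        out[pos++] = (uint8_t)(n);

        memcpy(out + pos, data, n);          /* the n bytes of the sequence */
        pos += n;

        memset(out + pos, 0, r);             /* pad so (n + r) mod 4 == 0   */
        pos += r;

        return pos;
    }

For example, encoding the 5-byte string "hello" this way produces 4 + 5 + 3 = 12 bytes: the length word, the five characters, and three residual zero bytes.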

2. Discuss various number representations in a computer system.

The binary numeral system, or base-2 number system, represents numeric values using two symbols, 0 and 1. More specifically, the usual base-2 system is a positional notation with a radix of 2. Owing to its straightforward implementation in digital electronic circuitry using logic gates, the binary system is used internally by all modern computers. Counting in binary is similar to counting in any other number system. Beginning with a single digit, counting proceeds through each symbol, in increasing order. Decimal counting uses the symbols 0 through 9, while binary uses only the symbols 0 and 1.


Hexadecimal

Binary may be converted to and from hexadecimal somewhat more easily. This is because the radix of the hexadecimal system (16) is a power of the radix of the binary system (2). More specifically, 16 = 2^4, so it takes four digits of binary to represent one digit of hexadecimal.

Octal

Binary is also easily converted to the octal numeral system, since octal uses a radix of 8, which is a power of two (namely, 2^3, so it takes exactly three binary digits to represent an octal digit). The correspondence between octal and binary numerals is the same as for the first eight digits of hexadecimal. Binary 000 is equivalent to the octal digit 0, binary 111 is equivalent to octal 7, and so forth.
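A small C sketch can make the grouping rule concrete by printing one value in all three bases; the sample value and the helper function are arbitrary choices for illustration.

    #include <stdio.h>

    /* Print the low 'bits' bits of v, most significant first. */
    static void print_binary(unsigned v, int bits)
    {
        for (int i = bits - 1; i >= 0; i--)
            putchar((v >> i) & 1 ? '1' : '0');
    }

    int main(void)
    {
        unsigned v = 0xAD;                       /* 1010 1101 in binary */
        print_binary(v, 8);
        printf(" (binary) = %o (octal) = %X (hexadecimal)\n", v, v);
        /* prints: 10101101 (binary) = 255 (octal) = AD (hexadecimal)   */
        return 0;
    }

Grouping the binary digits as 10|101|101 gives the octal digits 2, 5, 5, and grouping them as 1010|1101 gives the hexadecimal digits A, D.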

Gray Code

The reflected binary code, also known as Gray code after Frank Gray, is a binary numeral system where two successive values differ in only one bit. The problem with natural binary codes is that, with real (mechanical) switches, it is very unlikely that the switches will change states exactly in synchrony. In a transition such as 011 to 100, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce, the transition might look like 011 — 001 — 101 — 100. When the switches appear to be in position 001, the observer cannot tell if that is the "real" position 001, or a transitional state between two other positions. If the output feeds into a sequential system (possibly via combinational logic) then the sequential system may store a false value.
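The standard binary-to-Gray conversion is simply n XOR (n >> 1). The following C sketch lists the 3-bit codes referred to above so the single-bit-change property can be checked directly.

    #include <stdio.h>

    /* Print each 3-bit binary value and its Gray code equivalent. */
    int main(void)
    {
        for (unsigned n = 0; n < 8; n++) {
            unsigned gray = n ^ (n >> 1);        /* binary -> Gray */
            printf("%u%u%u -> %u%u%u\n",
                   (n >> 2) & 1, (n >> 1) & 1, n & 1,
                   (gray >> 2) & 1, (gray >> 1) & 1, gray & 1);
        }
        return 0;
    }

The Gray sequence printed is 000, 001, 011, 010, 110, 111, 101, 100: each code differs from the previous one in exactly one bit, so a mechanical encoder can never read a spurious intermediate position.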

3. Explain the addition of two floating point numbers with examples.

Here's how to add floating point numbers.

1. First, convert the two representations to scientific notation. Thus, we explicitly represent the hidden 1.

2. In order to add, we need the exponents of the two numbers to be the same. We do this by rewriting Y. This will result in Y being not normalized, but its value is equivalent to the normalized Y.

3. Add x - y to Y's exponent. Shift the radix point of the mantissa (significand) of Y left by x - y to compensate for the change in exponent.

4. Add the two mantissas of X and the adjusted Y together.

5. If the sum in the previous step does not have a single bit of value 1 left of the radix point, then adjust the radix point and exponent until it does.

6. Convert back to the one-byte floating point representation.

Example 1

Let's add the following two numbers:

Variable   sign   exponent   fraction
X          0      1001       110
Y          0      0111       000

Here are the steps again:

1. First, convert the two representations to scientific notation. Thus, we explicitly represent the hidden 1.

2. In normalized scientific notation, X is 1.110 x 2^2, and Y is 1.000 x 2^0.

3. In order to add, we need the exponents of the two numbers to be the same. We do this by rewriting Y. This will result in Y being not normalized, but its value is equivalent to the normalized Y.

4. Add x - y to Y's exponent. Shift the radix point of the mantissa (significand) of Y left by x - y to compensate for the change in exponent.

5. The difference of the exponents is 2. So, add 2 to Y's exponent, and shift the radix point left by 2. This results in 0.0100 x 2^2. This is still equivalent to the old value of Y. Call this readjusted value Y'.

6. Add the two mantissas of X and the adjusted Y' together.

7. We add 1.110 to 0.010 (both binary). The sum is 10.0 (binary). The exponent is still the exponent of X, which is 2.

8. If the sum in the previous step does not have a single bit of value 1 left of the radix point, then adjust the radix point and exponent until it does.

9. In this case, the sum, 10.0, has two bits left of the radix point. We need to move the radix point left by 1, and increase the exponent by 1 to compensate.

10. This results in 1.000 x 2^3.

11. Convert back to the one-byte floating point representation.

Sum     sign   exponent   fraction
X + Y   0      1010       000
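The worked example can be checked with a toy C sketch of the six steps. The one-byte format is the hypothetical one used above (1 sign bit, a 4-bit exponent that the example's field values suggest is biased by 7, and a 3-bit fraction with a hidden leading 1); positive normalized inputs only, with no rounding, overflow, or special-value handling.

    #include <stdio.h>
    #include <stdint.h>

    static uint8_t fp8_add(uint8_t a, uint8_t b)
    {
        int ea = (a >> 3) & 0xF, eb = (b >> 3) & 0xF;        /* biased exponents      */
        unsigned ma = 0x8 | (a & 0x7), mb = 0x8 | (b & 0x7); /* 1.fff as a 4-bit int  */

        if (ea < eb) {                     /* make 'a' the operand with the larger exponent */
            int te = ea; ea = eb; eb = te;
            unsigned tm = ma; ma = mb; mb = tm;
        }
        mb >>= (ea - eb);                  /* steps 2-3: align the radix points   */
        unsigned sum = ma + mb;            /* step 4: add the mantissas           */
        while (sum >= 0x10) {              /* step 5: renormalize to 1.fff        */
            sum >>= 1;
            ea++;
        }
        return (uint8_t)((ea << 3) | (sum & 0x7));   /* step 6: repack (sign = 0)  */
    }

    int main(void)
    {
        /* X = 0 1001 110 (1.110 x 2^2), Y = 0 0111 000 (1.000 x 2^0) */
        uint8_t x = 0x4E, y = 0x38;
        printf("X + Y = 0x%02X\n", fp8_add(x, y));   /* expect 0x50 = 0 1010 000 */
        return 0;
    }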


4. Discuss the organization of main memory.

This illustrates the essential features of computer memory, which behaves in a very different way to human memory (i.e., associative or content addressable memory). We can regard computer memory as a black box with three ports, called an address port, a data port, and a control port (a port is the point at which information enters or leaves a system). The address port is used to tell the memory which location is to be accessed. During a read cycle, information from the memory is sent to the computer via its data port, and in a write cycle information is loaded into the memory via its data port. The external system (i.e., the computer) uses the control port to tell the memory whether to carry out a read cycle or a write cycle.

We called this memory a black box because we don’t care about its internal organization or how it works—we’re interested only in what it does. The address indicates which of the locations (or memory cells) is to be accessed; for example, a memory with 2^16 = 65,536 locations has a 16-bit address that accesses location 0 (address 0000000000000000) to location 65,535 (address 1111111111111111). Each memory cell contains an n-bit binary value, where n is the number of bits in the words processed by the CPU.

Once again, consider the difference between the computer memory and the associative memory. A location within the computer memory must be accessed explicitly by providing the memory with an address; that is, you ask the memory what is stored in location x. You don’t provide an associative memory with an address (indeed, the very concept of an address is meaningless). Suppose you want to find whether an associative memory contains red or blue gizmos. You apply the key “gizmo” to the memory and examine the response (i.e., the memory indicates all elements that have a match with gizmo). If you have to search a conventional computer memory for a data item, you have to search the memory element by element until you find the data you require.
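A minimal C sketch of the three-port black box described above is given below. The 16-bit address (2^16 = 65,536 locations) matches the example in the text; the 8-bit word width, the array-backed storage, and the names are illustrative assumptions, not part of the text.

    #include <stdint.h>
    #include <stdio.h>

    enum control { READ_CYCLE, WRITE_CYCLE };    /* the control port */

    static uint8_t storage[1 << 16];             /* locations 0 .. 65,535 */

    /* One memory cycle: address port + control port in, data port in/out. */
    uint8_t memory_cycle(enum control ctrl, uint16_t address, uint8_t data_in)
    {
        if (ctrl == WRITE_CYCLE) {
            storage[address] = data_in;          /* data port -> selected cell */
            return data_in;
        }
        return storage[address];                 /* selected cell -> data port */
    }

    int main(void)
    {
        memory_cycle(WRITE_CYCLE, 0x1234, 42);
        printf("location 0x1234 holds %u\n", memory_cycle(READ_CYCLE, 0x1234, 0));
        return 0;
    }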

5. Explain various replacement algorithms.

In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out (swap out, write to disk) when a page of memory needs to be allocated. Paging happens when a page fault occurs and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.


When the page that was selected for replacement and paged out is referenced again, it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.

The not recently used (NRU) page replacement algorithm favours keeping pages in memory that have been recently used. This algorithm works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so at the software level as well.

The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little book-keeping on the part of the operating system. The idea is obvious from the name: the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back and the earliest arrival in front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application, so it is rarely used in its unmodified form. This algorithm experiences Belady's anomaly.

The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in the fact that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as Adaptive Replacement Cache), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible.
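As a concrete illustration of the FIFO policy described above, here is a small C sketch that replays a reference string against a fixed number of frames and counts the page faults. The frame count and the reference string are illustrative choices, not taken from the text.

    #include <stdio.h>

    #define FRAMES 3

    int main(void)
    {
        int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
        int nrefs = sizeof refs / sizeof refs[0];
        int frame[FRAMES], used = 0, oldest = 0, faults = 0;

        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int f = 0; f < used; f++)
                if (frame[f] == refs[i]) { hit = 1; break; }
            if (!hit) {
                faults++;
                if (used < FRAMES) {
                    frame[used++] = refs[i];      /* free frame available  */
                } else {
                    frame[oldest] = refs[i];      /* evict the oldest page */
                    oldest = (oldest + 1) % FRAMES;
                }
            }
        }
        printf("%d page faults with %d frames\n", faults, FRAMES);
        return 0;
    }

Running the same reference string with FRAMES set to 4 actually produces more faults than with 3, which is exactly the Belady's anomaly mentioned above.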

6. Discuss the different categories of instructions.

When a computer program is executed, the computer's Central Processing Unit (CPU) examines and executes one instruction at a time. At a hardware level, the instruction is the smallest complete unit of work that the computer can perform. This section examines the types of instructions, and some of the internal contents of instructions.

Each computer or CPU potentially has its own unique set of instructions, referred to as an instruction set. Since many upgraded or new computers are extensions of existing machines, many CPU chips are downward compatible. This means that their instruction set extends, but still supports, the instructions in an older set.

There are several common instruction sets widely shared in the computer world. IBM mainframes, starting with the IBM 360 series in 1964, have used the same instruction set form, extended over time. In the PC world, there are two widely used sets. The Motorola 68000 generic set has been used in the Apple Macintosh and Sun workstations. The Intel x86 generic set is widely used in machines known as Intel compatible.

Each instruction has an identifying operation code or opcode. Generally this is one byte long, and occupies the first or only byte of the instruction. The remainder of the instruction is made up of operands. Any given instruction contains from 0 to 3 operands.

Op Codes

Each operation code is a unique 8-bit byte, or a series of eight 0 or 1 digits. These are usually expressed in hexadecimal notation. The op code is interpreted by a CPU circuit called the decoder, which determines the actions to be taken.

Operands

There are several types of operands that are interpreted by the CPU. These can be divided into the following major types:

Immediate Operands contain control data, flags, or very short data values.

Direct Operands contain an address, register or input/output device identifier.

Indirect or Relative operands refer to memory storage and contain a relative address, modified by the contents of a particular register.

Even when a program is written in a high level language, certain debugging tools will show and detail the instructions that are generated by the compiler.
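The opcode-plus-operands layout described above can be sketched in C. The opcodes, operand counts, and mnemonics below are hypothetical examples invented for illustration; they do not belong to any real instruction set named in the text.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_NOP = 0x00, OP_LOAD_IMM = 0x10, OP_ADD_REG = 0x20 };

    /* Decode one instruction and return its total length in bytes. */
    size_t decode(const uint8_t *code)
    {
        switch (code[0]) {                     /* first byte: the opcode        */
        case OP_NOP:                           /* no operands                   */
            printf("NOP\n");
            return 1;
        case OP_LOAD_IMM:                      /* register + immediate operand  */
            printf("LOAD r%d, #%d\n", code[1], code[2]);
            return 3;
        case OP_ADD_REG:                       /* two direct register operands  */
            printf("ADD r%d, r%d\n", code[1], code[2]);
            return 3;
        default:
            printf("unknown opcode 0x%02X\n", code[0]);
            return 1;
        }
    }

    int main(void)
    {
        uint8_t program[] = { OP_LOAD_IMM, 1, 42, OP_ADD_REG, 1, 2, OP_NOP };
        for (size_t pc = 0; pc < sizeof program; )
            pc += decode(program + pc);        /* step by each instruction length */
        return 0;
    }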

7. Explain various operations of ALU.

The arithmetic logic unit (ALU), which is an integral part of the Data Translation frame grabber, is capable of a number of simple arithmetic and logic operations. The ALU has two input arms, labelled "A" and "B". Eight-bit intensity information is produced by the analog to digital converter (A/D) and is presented, via the input lookup table (ILUT), to the "A" arm of the ALU. Similarly, eight-bit intensity information is transferred from a specified frame buffer to the "B" arm of the ALU. Note that the "B" arm is not translated by a lookup table. The two eight-bit inputs to the ALU are combined in the selected manner to produce a nine-bit result. This nine-bit result is then mapped onto eight-bit space through the selected result lookup table (RLUT). The full range of ALU operations is supported by specifying the buffers, lookup tables and ALU operation (zoom and pan on the "B" mask buffer are not supported).

The user specifies which buffer(s) are to be filled with images acquired using the ALU. Any valid buffer may be used, remembering that buffers in extended memory will be acquired first to buffer 0 and then copied to extended memory. The time offset at which each buffer is to be acquired must be specified. These time offsets must be specified in ascending order as DigImage does not control the VTR for this facility. The normal time format minutes:seconds is used (see the general help given by <shift><f1> for more details). If another buffer in the sequence is required, then <Y> should be used. If all the buffers required have been specified, then <N> will let DigImage proceed. Up to 32 buffers may be specified in the sequence. Note that DigImage does not check to see if a given buffer is specified more than once.

This option specifies the buffer to be presented to the "B" input arm of the ALU. If -1 is specified, then the buffer being acquired will be its own mask, even if an extended memory buffer is being acquired. If another value is given, then it must correspond to an onboard buffer; the same buffer will be used for every image in the sequence (though the buffer's contents may be changed by acquiring to it). This input will be requested only if the mask buffer is specified as -1 (i.e., set to the input buffer), thus providing a method of repeated self-referencing. This selection occurs only if the mask buffer is the input buffer (i.e., -1 specified for mask buffer) and allows the buffer to be erased before any image is acquired to it. Yes (<Y>) causes the buffer to be erased, while no (<N>) preserves the present contents.
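The per-pixel data path described above (A arm through the ILUT, B arm from a frame buffer, a 9-bit ALU result mapped back to 8 bits through the RLUT) can be sketched in C. This is only an illustration of the description, not the actual Data Translation or DigImage interface; the operation set and function name are assumptions.

    #include <stdint.h>

    enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR };

    /* Combine one A-arm pixel and one B-arm pixel as described above. */
    uint8_t alu_pixel(enum alu_op op, uint8_t a_raw,
                      const uint8_t ilut[256],   /* input LUT on the A arm    */
                      uint8_t b,                 /* B arm: frame buffer value */
                      const uint8_t rlut[512])   /* result LUT, 9-bit index   */
    {
        uint16_t a = ilut[a_raw];                /* A arm passes through ILUT */
        uint16_t result9;                        /* 9 bits: 0 .. 511          */

        switch (op) {
        case ALU_ADD: result9 = (uint16_t)(a + b);          break;
        case ALU_SUB: result9 = (uint16_t)(a - b) & 0x1FF;  break; /* wrap into 9 bits */
        case ALU_AND: result9 = a & b;                      break;
        default:      result9 = a | b;                      break;
        }
        return rlut[result9];                    /* map 9-bit result to 8 bits */
    }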

9. Discuss the different formats of floating point numbers.

The IEEE 754 standard defines:

arithmetic formats: sets of binary and decimal floating-point data, which consist of finite numbers (including signed zeros and subnormal numbers), infinities, and special 'not a number' values (NaNs)

interchange formats: encodings (bit strings) that may be used to exchange floating-point data in an efficient and compact form

rounding algorithms: methods to be used for rounding numbers during arithmetic and conversions

operations: arithmetic and other operations on arithmetic formats

exception handling: indications of exceptional conditions (such as division by zero, overflow, etc.)

The standard also includes extensive recommendations for advanced exception handling, additional operations (such as trigonometric functions), expression evaluation, and for achieving reproducible results.

A given format comprises:

Finite numbers, which may be either base 2 (binary) or base 10 (decimal). Each finite number is most simply described by three integers: s = a sign (zero or one), c = a significand (or 'coefficient'), and q = an exponent. The numerical value of a finite number is

   (-1)^s x c x b^q

where b is the base (2 or 10). For example, if the sign is 1 (indicating negative), the significand is 12345, the exponent is -3, and the base is 10, then the value of the number is -12.345 (see the short sketch after this list).

Two infinities: +∞ and -∞.

Two kinds of NaN (quiet and signaling). A NaN may also carry a payload, intended for diagnostic information indicating the source of the NaN. The sign of a NaN has no meaning, but it may be predictable in some circumstances.
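The (s, c, q) description of a finite number can be checked with a tiny C sketch; the example values are the ones used in the list above (b = 10, s = 1, c = 12345, q = -3), and the function name is an illustrative choice.

    #include <stdio.h>
    #include <math.h>

    /* value = (-1)^s * c * b^q */
    static double fp_value(int s, long c, int q, int b)
    {
        return (s ? -1.0 : 1.0) * (double)c * pow((double)b, (double)q);
    }

    int main(void)
    {
        printf("%g\n", fp_value(1, 12345, -3, 10));   /* prints -12.345 */
        return 0;
    }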

The possible finite values that can be represented in a given format are determined by the base (b), the number of digits in the significand (precision, p), and the exponent parameter emax:

c must be an integer in the range zero through b^p - 1 (e.g., if b=10 and p=7 then c is 0 through 9999999)

q must be an integer such that 1 - emax <= q + p - 1 <= emax (e.g., if p=7 and emax=96 then q is -101 through 90).


Hence (for the example parameters) the smallest non-zero positive number that can be represented is 1 x 10^-101 and the largest is 9999999 x 10^90 (9.999999 x 10^96), and the full range of numbers is -9.999999 x 10^96 through 9.999999 x 10^96. The numbers closest to the inverse of these bounds (-1 x 10^-95 and 1 x 10^-95) are considered to be the smallest (in magnitude) normal numbers; non-zero numbers between these smallest numbers are called subnormal numbers.

Zero values are finite values with significand 0. These are signed zeros: the sign bit specifies whether a zero is +0 (positive zero) or -0 (negative zero).

The standard defines five basic formats, named using their base and the number of bits used to encode them. A conforming implementation must fully implement at least one of the basic formats. There are three binary floating-point basic formats (which can be encoded using 32, 64 or 128 bits) and two decimal floating-point basic formats (which can be encoded using 64 or 128 bits). The binary32 and binary64 formats are the single and double formats of IEEE 754-1985.

A format that is just to be used for arithmetic and other operations need not have an encoding associated with it (that is, an implementation can use whatever internal representation it chooses); all that needs to be defined are its parameters (b, p, and emax). These parameters uniquely describe the set of finite numbers (combinations of sign, significand, and exponent) that it can represent.
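On a typical platform where the C types float and double correspond to binary32 and binary64 (an assumption, since C does not strictly require it), the format parameters above can be read straight from <float.h>, as in this short sketch.

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        /* p = significand digits (including the hidden bit); emax = MAX_EXP - 1 */
        printf("float : %zu bytes, p = %d, emax = %d\n",
               sizeof(float), FLT_MANT_DIG, FLT_MAX_EXP - 1);
        printf("double: %zu bytes, p = %d, emax = %d\n",
               sizeof(double), DBL_MANT_DIG, DBL_MAX_EXP - 1);
        return 0;
    }

On an IEEE 754 machine this prints p = 24, emax = 127 for binary32 and p = 53, emax = 1023 for binary64.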

10. Explain the characteristics of a memory system.

The term "memory" applies to any electronic component capable of temporarily storing data. There are two main categories of memories:

internal memory that temporarily memorises data while programs are running. Internal memory uses microconductors, i.e. fast specialised electronic circuits. Internal memory corresponds to what we call random access memory (RAM).

auxiliary memory (also called physical memory or external memory) that stores information over the long term, including after the computer is turned off. Auxiliary memory corresponds to magnetic storage devices such as the hard drive, optical storage devices such as CD-ROMs and DVD-ROMs, as well as read-only memories.

Technical Characteristics

The main characteristics of a memory are:

Capacity, representing the global volume of information (in bits) that the memory can store

Access time, corresponding to the time interval between the read/write request and the availability of the data

Cycle time, representing the minimum time interval between two successive accesses

Throughput, which defines the volume of information exchanged per unit of time, expressed in bits per second


Non-volatility, which characterises the ability of a memory to store data when it is not being supplied with electricity

The ideal memory has a large capacity, short access and cycle times, a high throughput, and is non-volatile. 

However, fast memories are also the most expensive. This is why memories that use different technologies are used in a computer, interfaced with each other and organised hierarchically. 

The fastest memories are located in small numbers close to the processor. Auxiliary memories, which are not as fast, are used to store information permanently.
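The five characteristics listed above can be captured in a simple C structure, sketched below. The two sample entries are purely hypothetical ballpark values added for illustration, not figures from the text.

    #include <stdbool.h>
    #include <stdio.h>

    struct memory_spec {
        const char *name;
        double capacity_bits;      /* global volume of information     */
        double access_time_s;      /* request -> data available        */
        double cycle_time_s;       /* minimum gap between two accesses */
        double throughput_bps;     /* bits exchanged per second        */
        bool   non_volatile;       /* keeps data without power?        */
    };

    int main(void)
    {
        struct memory_spec specs[] = {
            { "internal RAM (hypothetical)", 8e9,  1e-8, 2e-8, 1e10, false },
            { "hard drive (hypothetical)",   8e12, 1e-2, 1e-2, 1e9,  true  },
        };
        for (int i = 0; i < 2; i++)
            printf("%-26s access %.0e s, %svolatile\n",
                   specs[i].name, specs[i].access_time_s,
                   specs[i].non_volatile ? "non-" : "");
        return 0;
    }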

Types of Memories

Random Access Memory

Random access memory, generally called RAM, is the system's main memory, i.e. it is a space that allows you to temporarily store data when a program is running. 

Unlike data storage on an auxiliary memory such as a hard drive, RAM is volatile, meaning that it only stores data as long as it is supplied with electricity. Thus, each time the computer is turned off, all the data in the memory are irremediably erased.

Read-Only Memory

Read-only memory, called ROM, is a type of memory that allows you to keep the information contained on it even when the memory is no longer receiving electricity. Basically, this type of memory only has read-only access. However, it is possible to save information in some types of ROM memory.

Flash Memory

Flash memory is a compromise between RAM-type memories and ROM memories. Flash memory possesses the non-volatility of ROM memories while providing both read and write access. However, the access times of flash memories are longer than the access times of RAM.

11. Discuss the physical characteristics of disks.

Each disk platter has a flat circular shape. Its two surfaces are covered with a magnetic material and information is recorded on the surfaces. The platters of hard disks are made from rigid metal or glass, while floppy disks are made from flexible material.

The disk surface is logically divided into tracks, which are subdivided into sectors. A sector (varying from 32 bytes to 4096 bytes, usually 512 bytes) is the smallest unit of information that can be read from or written to disk. There are 4-32 sectors per track and 20-1500 tracks per disk surface.

The arm can be positioned over any one of the tracks. The platter is spun at high speed. To read information, the arm is positioned over the correct track. When the data to be accessed passes under the head, the read or write operation is performed.

A disk typically contains multiple platters. The read-write heads of all the tracks are mounted on a single assembly called a disk arm, and move together. Multiple disk arms are moved as a unit by the actuator. Each arm has two heads, to read the surfaces above and below it. The set of tracks over which the heads are located forms a cylinder. This cylinder holds the data that is accessible within the disk latency time. It is clearly sensible to store related data in the same or adjacent cylinders.

Disk platters range from 1.8" to 14" in diameter, and 5 1/4" and 3 1/2" disks dominate owing to their lower cost and faster seek times compared with larger disks, while still providing high storage capacity.

A disk controller interfaces between the computer system and the actual hardware of the disk drive. It accepts commands to read or write a sector, and initiates the corresponding actions. Disk controllers also attach checksums to each sector to detect read errors. Remapping of bad sectors: if a controller detects that a sector is damaged when the disk is initially formatted, or when an attempt is made to write the sector, it can logically map the sector to a different physical location.
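The cylinder/track/sector geometry described above can be sketched in C. The numeric parameters are made-up example values (chosen to fall within the ranges quoted above), and the cylinder-head-sector to block mapping shown is the classic ordering, given only as an illustration.

    #include <stdio.h>

    #define CYLINDERS  1000   /* tracks per surface                 */
    #define HEADS      8      /* recording surfaces (platters x 2)  */
    #define SECTORS    32     /* sectors per track                  */
    #define SECTOR_SZ  512    /* bytes per sector                   */

    /* Map (cylinder, head, sector) to a logical block number:
       cylinder varies slowest, sector fastest. */
    static long block_number(int cyl, int head, int sector)
    {
        return ((long)cyl * HEADS + head) * SECTORS + sector;
    }

    int main(void)
    {
        long total = (long)CYLINDERS * HEADS * SECTORS * SECTOR_SZ;
        printf("capacity: %ld bytes\n", total);          /* 131,072,000 here */
        printf("block of (cyl 5, head 2, sector 7): %ld\n",
               block_number(5, 2, 7));                   /* 1351 */
        return 0;
    }

Because all the tracks of one cylinder are reachable without moving the arm, consecutive block numbers in this ordering stay within a cylinder as long as possible, which matches the advice above about storing related data in the same or adjacent cylinders.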

SCSI (Small Computer System Interface) is commonly used to connect disks to PCs and workstations. Mainframe and server systems usually have a faster and more expensive bus to connect to the disks.

Head crash: if the head touches the platter, it damages the magnetic surface, which can cause the entire disk to fail.

A fixed-head disk has a separate head for each track -- very many heads, very expensive. Multiple disk arms allow more than one track to be accessed at a time. Both were used in high-performance mainframe systems but are relatively rare today.