
BCA REVAMPED PROGRAMME
IV SEMESTER

ASSIGNMENT

Name : VELMURUGAN C

Registration No. : 531210112

Learning Center : KUWAIT EDUCATIONAL CENTER

Learning Center Code : 2527

Course/Program : BCA

Semester : IV Semester

Subject Code : BC0051

Subject Title : SYSTEM SOFTWARE

Date of Submission : 26.02.2014

Marks Awarded :

Average marks of both assignments

_________________________________________________ _______________________________________________

Signature of Center Coordinator Signature of Evaluator

Directorate of Distance Education
Sikkim Manipal University

II Floor, Syndicate House, Manipal 576 104



Question 1: Explain the Microsoft Macro Assembler and the SPARC Assembler.

Ans.:

MICROSOFT MACRO ASSEMBLER:

The Microsoft Macro Assembler (MASM) is an assembler for the x86 family of microprocessors, originally produced by Microsoft for the MS-DOS operating system. It supported a wide variety of macro facilities and structured programming idioms, including high-level constructs for looping, procedure calls and alternation (MASM is therefore an example of a high-level assembler). Later versions added the capability of producing programs for the Windows operating systems that followed MS-DOS. MASM is one of the few Microsoft development tools for which there was no separate 16-bit and 32-bit version.

MASM affords the programmer looking for additional performance a three-pronged approach to performance-based solutions. MASM can build very small, high-performance executable files that are well suited where size and speed matter. When additional performance is required in other languages, MASM can enhance them with small, fast and powerful dynamic-link libraries. For programmers who work in Microsoft Visual C/C++, MASM builds modules and libraries in the same format, so the C/C++ programmer can build modules or libraries in MASM and link them directly into their own C/C++ programs.

This allows the C/C++ programmer to target critical areas of their code in a very efficient and convenient manner: graphics manipulation, games, very high-speed data manipulation and processing, parsing at speeds that most programmers have never seen, encryption, compression, and any other form of information processing that is processor intensive.

SPARC ASSEMBLER:

SPARC (which stands for Scalable Processor Architecture) is an open set of technical specifications that any person or company can license and use to develop microprocessors and other semiconductor devices based on published industry standards. SPARC was invented in the labs of Sun Microsystems Inc. based upon pioneering research into Reduced Instruction Set Computing (RISC) at the University of California at Berkeley. The first standard product based on the SPARC architecture was produced by Sun and Fujitsu in 1986; Sun followed in 1987 with its first workstation based on a SPARC processor. In 1989, Sun Microsystems transferred ownership of the SPARC specifications to an independent, non-profit organization, SPARC International, which administers and licenses the technology and provides compliance testing and other services for its members. SPARC is a modern, fast, pipelined architecture. Its assembly language illustrates most of the features found in assembly languages for the variety of computer architectures which have been developed.


Question 2: What is Code Optimization and Code Generation?

Ans.:

CODE OPTIMIZATION:

In computer science, program optimization or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, operates with less memory or other resources, or draws less power.

Object programs that are frequently executed should be fast and small. Certain compilers include a phase that applies transformations to the output of the intermediate code generator, in an attempt to produce an intermediate-language version of the source program from which a faster or smaller object-language program can ultimately be produced. This phase is popularly called the optimization phase.

A good optimizing compiler can improve the target program by perhaps a factor of two in overall speed, in comparison with a compiler that generates code carefully but without using specialized techniques generally referred to as code optimization. There are two types of optimizations used:

• Local Optimization
• Loop Optimization
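A local optimization such as constant folding can be sketched as a simple pass over a tiny three-address intermediate representation. The IR tuple format and the helper name below are invented purely for illustration, not taken from any particular compiler:

```python
# Minimal sketch of a local (peephole) optimization: constant folding
# on a tiny three-address IR. Each instruction is (op, dest, a, b);
# this format is invented for illustration.

def constant_fold(instrs):
    """Replace ('add', dest, c1, c2) with a 'load_const' of c1 + c2
    when both operands are integer constants."""
    out = []
    for op, dest, a, b in instrs:
        if op == "add" and isinstance(a, int) and isinstance(b, int):
            out.append(("load_const", dest, a + b, None))  # folded at compile time
        else:
            out.append((op, dest, a, b))                   # left unchanged
    return out

ir = [("add", "t1", 2, 3),        # both operands constant -> foldable
      ("add", "t2", "t1", "x")]   # depends on a variable -> unchanged

print(constant_fold(ir))
```

The fold removes a run-time addition entirely; real compilers combine many such small, local rewrites in the optimization phase.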

CODE GENERATION:

In computer science, code generation is the process by which a compiler's code generator converts some intermediate representation of source code into a form (e.g., machine code) that can be readily executed by a machine. Sophisticated compilers typically perform multiple passes over various intermediate forms. This multi-stage process is used because many algorithms for code optimization are easier to apply one at a time, or because the input to one optimization relies on the completed processing performed by another optimization.

The code generation phase converts the intermediate code into a sequence of machine instructions. A simple-minded code generator might map the statement A := B + C into the machine code sequence:

LOAD B
ADD C
STORE A


However, such a straightforward macro-like expansion of intermediate code into machine code usually produces a target program that contains many redundant loads and stores and that utilizes the resources of the target machine inefficiently. To avoid these redundant loads and stores, a code generator might keep track of the run-time contents of registers. Knowing what quantities reside in registers, the code generator can generate loads and stores only when necessary.

Many computers have only a few high speed registers in which computations can be performed particularly quickly. A good code generator would therefore attempt to utilize these registers as efficiently as possible. This aspect of code generation, called register allocation, is particularly difficult to do optimally.
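The register-tracking idea above can be sketched for a machine with a single accumulator register. The assignment format and instruction names are illustrative only:

```python
# Sketch of a simple code generator for assignments like A := B + C.
# It tracks which variable currently resides in the single accumulator
# register, emitting a LOAD only when necessary. All names are
# invented for illustration.

def generate(assignments):
    code, in_register = [], None
    for dest, left, right in assignments:
        if in_register != left:          # skip a redundant LOAD
            code.append(f"LOAD {left}")
        code.append(f"ADD {right}")
        code.append(f"STORE {dest}")
        in_register = dest               # register now holds the stored result
    return code

# A := B + C, then D := A + E. The second LOAD of A is avoided
# because the register already holds A after the STORE.
print(generate([("A", "B", "C"), ("D", "A", "E")]))
```

A naive generator would emit six instructions here; tracking register contents saves the redundant `LOAD A`.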

Question 3: Define the process of Bootstrapping.

Ans.:

In general parlance, bootstrapping usually refers to the starting of a self-sustaining process that is supposed to proceed without external input. In computer technology the term (usually shortened to booting) usually refers to the process of loading the basic software into the memory of a computer after power-on or general reset, especially the operating system which will then take care of loading other software as needed.

In computing, bootstrapping refers to a process where a simple system activates another, more complicated system that serves the same purpose. It is a solution to the chicken-and-egg problem of starting a certain system without the system already functioning. The term is most often applied to the process of starting up a computer, in which a mechanism is needed to execute the software program that is responsible for executing software programs (the operating system).

BOOTSTRAP LOADING:

In modern computers, the first program the computer runs after a hardware reset is invariably stored in a ROM known as the bootstrap ROM, as in "pulling oneself up by the bootstraps." When the CPU is powered on or reset, it sets its registers to a known state. On x86 systems, for example, the reset sequence jumps to the address 16 bytes below the top of the system's address space. The bootstrap ROM occupies the top 64K of the address space, and its code then starts up the computer. On IBM-compatible x86 systems, the boot ROM code reads the first block of the floppy disk (or, if that fails, the first block of the first hard disk) into memory location zero and jumps to location zero. The program in block zero in turn loads a slightly larger operating system boot program from a known place on the disk into memory, and jumps to that program, which in turn loads the operating system and starts it.

SOFTWARE BOOTSTRAPPING:

Bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g. ed) and an assembler program. Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language and so on, until one can have a graphical IDE and an extremely high-level programming language.

COMPILER BOOTSTRAPPING:

In compiler design, a bootstrap or bootstrapping compiler is a compiler that is written in the target language, or a subset of the language, that it compiles. Examples include gcc, GHC, OCaml, BASIC, PL/I and, more recently, the Mono C# compiler.
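The bootstrap sequence can be sketched abstractly: an existing stage-0 compiler builds stage 1 from the new compiler's source, and stage 1 then rebuilds itself. When the two outputs match, the bootstrap has reached a fixed point. Everything below (the toy "compilation" and the names) is invented for illustration:

```python
# Conceptual sketch of compiler bootstrapping. A pre-existing stage-0
# compiler builds stage 1; stage 1 then recompiles its own source to
# produce stage 2. Identical outputs indicate a successful bootstrap.
# The toy "compilation" here is just a deterministic transformation.

def stage0_compile(source):
    # The pre-existing compiler (e.g. written in another language).
    return "binary:" + source.upper()

def run_compiler(compiler_binary, source):
    # Pretend to execute a compiled compiler on some source; a correct
    # compiler must produce the same translation as stage 0 did.
    return "binary:" + source.upper()

compiler_source = "my-compiler"
stage1 = stage0_compile(compiler_source)          # built by stage 0
stage2 = run_compiler(stage1, compiler_source)    # built by stage 1 itself

# Fixed point: the compiler has successfully rebuilt itself.
assert stage1 == stage2
print(stage2)
```

Real bootstraps (e.g. gcc's three-stage build) compare stage 2 and stage 3 binaries in exactly this spirit, as a self-consistency check.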

Question 4: Explain the process of Relocation.

Ans.:

Relocation is the process of assigning load addresses to various parts of a program and adjusting the code and data in the program to reflect the assigned addresses. A linker usually performs relocation in conjunction with symbol resolution, the process of searching files and libraries to replace symbolic references or names of libraries with actual usable addresses in memory before running a program. Although relocation is typically done by the linker at link time, it can also be done at execution time by a relocating loader, or by the running program itself.

Compilers or assemblers typically generate the executable with zero as the lowest starting address. Before the object code is executed, these addresses must be adjusted so that they denote the correct run-time addresses.


Relocation is typically done in two steps:

1. Each object file has various sections such as code, data, .bss, etc. To combine all the objects into a single executable, the linker merges all sections of the same type into a single section of that type. The linker then assigns run-time addresses to each section and each symbol. At this point, the code (functions) and data (global variables) have unique run-time addresses.

2. Each section contains references to one or more symbols, and these references must be modified so that they point to the correct run-time addresses.

A fixup table can also be provided in the header of the object code file. Each "fixup" is a pointer to an address in the object code that must be changed when the loader relocates the program. Fixups are designed to support relocation of the program as a complete unit. In some cases, each fixup in the table is itself relative to a base address of zero, so the fixups themselves must be changed as the loader moves through the table. In some architectures, compilers, and executable models, a fixup that crosses certain boundaries (such as a segment boundary) or that does not lie on a word boundary is illegal and flagged as an error by the linker.
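A relocating loader walking a fixup table can be sketched as follows. The object code is modelled as a list of words and each fixup as the index of a word holding an address assembled relative to base 0; this format is invented for illustration and is far simpler than real object-file formats like ELF:

```python
# Sketch of a relocating loader applying a fixup table. Each fixup is
# the index of a word in the object code that holds an address
# assembled relative to base 0 (format invented for illustration).

def relocate(code, fixups, load_base):
    code = list(code)                 # don't mutate the caller's copy
    for index in fixups:
        code[index] += load_base      # adjust the embedded address
    return code

obj_code = [10, 0x0004, 30, 0x0008]   # words 1 and 3 hold addresses
fixup_table = [1, 3]

# Load the program at 0x4000: only the flagged words are adjusted.
print(relocate(obj_code, fixup_table, 0x4000))
```

Non-address words (the 10 and 30 above) pass through untouched, which is exactly why the fixup table is needed: the loader cannot tell addresses from data on its own.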

Question 5: Discuss the concept of Java and Garbage Collection.

Ans.:

Java is an object-oriented programming language designed for use in the distributed environment of the Internet. It was designed to have the "look and feel" of the C++ language, but it is simpler to use than C++ and enforces an object-oriented programming model. Java can be used to create complete applications that may run on a single computer or be distributed among servers and clients in a network. It can also be used to build a small application module, or applet, for use as part of a Web page. Applets, which are small programs that usually run inside Internet browsers, make it possible for a Web page user to interact with the page.

JVM (Java Virtual Machine):

The Java Virtual Machine (JVM) is an abstract computing machine. The JVM is a program that looks like a machine to the programs written to execute in it. This way, Java programs are written against the same set of interfaces and libraries. Each JVM implementation for a specific operating system translates the Java programming instructions into instructions and commands that run on the local operating system. This way, Java programs achieve platform independence.

REFERENCE COUNTING:

Reference counting is a form of automatic memory management where each object has a count of the number of references to it. An object's reference count is incremented when a reference to it is created and decremented when a reference is destroyed. The object's memory is reclaimed when the count reaches zero.

There are two major disadvantages to reference counting:

• If two or more objects refer to each other, they can create a cycle whereby neither will be collected, as their mutual references never let their reference counts become zero. Some reference-counting GC systems (like the one in CPython) use specific cycle-detecting algorithms to deal with this issue.

• In naive implementations, each assignment of a reference and each reference falling out of scope often require modifications of one or more reference counters.

Suppose two references (R1 and R2) exist to an object (object1). Initially, at time t1, the reference count is 2; after time t2, the reference count drops to 1 (see fig. 7.6).

In fig. 7.6 the reference count is one. If the reference count reaches 0 (zero), the garbage collector removes the object from memory and allocates the freed space to another program or object (see fig. 7.7). Remember, GC is part of the Java environment.
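The R1/R2 scenario above can be played out with a toy reference-counting scheme. The class and its "reclaim" flag are invented for illustration; real collectors free the memory rather than set a flag:

```python
# Toy reference-counting scheme mirroring the R1/R2 example: the
# object's count rises as references are created and falls as they
# are destroyed; at zero the memory would be reclaimed. Invented
# for illustration only.

class RefCounted:
    def __init__(self, name):
        self.name = name
        self.count = 0
        self.freed = False

    def add_ref(self):
        self.count += 1

    def release(self):
        self.count -= 1
        if self.count == 0:
            self.freed = True   # the collector would free this memory

object1 = RefCounted("object1")
object1.add_ref()               # R1 created
object1.add_ref()               # R2 created: count = 2 (time t1)
object1.release()               # R2 destroyed: count = 1 (time t2)
assert not object1.freed        # still reachable through R1
object1.release()               # R1 destroyed: count = 0, reclaimed
print(object1.freed)
```

Note that if `object1` held a reference back to another object that referenced it, neither count would ever reach zero, which is the cycle problem described above.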

MEMORY LEAK:


In computer science, a memory leak is a particular kind of unintentional memory consumption by a computer program, where the program fails to release memory when it is no longer needed.

Question 6: What is the role of compilers in Error Detection and Recovery?

Ans.:

To be useful, a compiler should detect all errors in the source code and report them to the user.

These errors could be:

Lexical errors: e.g., badly formed identifiers or constants, symbols which are not part of the language, badly formed comments, etc.

Syntactic errors: chains of syntactic units that do not conform to the syntax of the source language.

Semantic errors: e.g., operations conducted on incompatible types, undeclared variables, double declaration of variable, reference before assignment, etc.

Run-time errors: errors detectable only at run time, e.g., pointers with null value or whose value is outside allowed limits, indexing of vectors with unsuitable indices, etc.
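The detect-and-continue behavior expected of a good compiler can be sketched using Python's own compiler, exposed through the built-in `compile()`: each malformed unit is reported, and analysis continues with the rest instead of quitting at the first error. The list of source "units" is invented for illustration:

```python
# Sketch of error detection with recovery: report each syntactically
# bad unit via Python's built-in compile(), then keep analyzing the
# remaining units rather than stopping at the first error.

units = ["x = 1", "y = ", "z = x + 2"]   # the second unit is malformed

errors = []
for i, src in enumerate(units):
    try:
        compile(src, f"<unit {i}>", "exec")   # syntax analysis only
    except SyntaxError as err:
        errors.append((i, err.msg))           # report and recover

print(errors)   # only unit 1 is reported; units 0 and 2 passed
```

This mirrors the minimum standard described below: no crash, no invalid output, and no bail-out on the first detected error.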

A good compiler…

• Reports ALL errors.
• Does not falsely report errors.
• Does not repeatedly report the same error.

Each compiler has its own way or mechanism to handle errors that occur in source program. Obviously unacceptable modes of behavior are to produce a system crash, to emit invalid output, or to merely quit on the first detected error. At the very least, a compiler should attempt to recover from each error and continue analyzing its input. It is in the method of recovery and the type of continuation that compilers differ. A simple compiler may stop all activities other than lexical and syntactic analysis after the detection of the first error.

A more complex compiler may attempt to repair the error, that is, transform the erroneous input into a similar but legal input on which normal processing can be resumed. An even more sophisticated compiler may attempt to correct the erroneous input by making a guess as to what the user intended. No compiler can do true correction, and there are convincing reasons why a compiler ought not to try.


One reason is that to do correction a compiler must know the intent of the programmer. However, the true intent is often completely obscured by the errors in the source program. Since completely accurate error correction can be done only by the programmer, it is a task most compilers should not waste time attempting.