
Basic Computer B.pharm PU

CSC 191 (Credit hours 3)

Computer Science (Introductory)

B. Pharm., First Year, First Semester

Course Objectives:

The objective of the course is to provide students with a general view of computer architecture, its operation, and its applications; familiarize them with existing technologies; and give them hands-on experience on personal computers.

Course Contents:

1. Introduction to Computers (3 hours)

History of Computers, Classification of Computers, Functioning of Computers, Computer Hardware, Software, Firmware

2. Number System (6 hours)

Decimal number system, Binary number system, Hexadecimal number system, Octal number system, Conversion of a number from one system to another, Addition and Subtraction of binary numbers, Complements, Subtraction by 2's complement method

3. Boolean Algebra and Logic Gates (5 hours)

Introduction, Basic operations of Boolean algebra, DeMorgan's Theorem, Boolean variables and functions, Boolean postulates, Duals and complements of Boolean expressions, SOP and POS standard forms, Canonical forms of Boolean expressions, Simplification of Boolean expressions by the Karnaugh map method, Logic Gates: AND, OR, NOT, NOR, XOR, XNOR

4. Arithmetic Logic Unit and Memory Element (2 hours)

Half adder, Full adder, Flip-flop, R-S flip-flop

5. Memory (3 hours)

Classification, RAM, ROM, Floppy disk, Hard disk

6. Input Output Devices and Computer Network (5 hours)

Role of input and output devices, Keyboard, Mouse, Scanners, MICR, Video terminals, Printers, Plotters, Digital to analog conversion, Introduction to computer network, Sharing, Network types

7. Word Processing (4 hours)

Introduction, Concept of file, Inputting the text, Formatting, Inserting the files and Symbols, Mail merge facility, Grammar checking, Auto correct feature (MS-Word is to be used)

8. Spreadsheet Analysis (4 hours)

Introduction to spreadsheets, Workbook and worksheet, Formula, Formatting and Graphics (MS-Excel is to be used)

9. Database Management (4 hours)

Data, Database, Input, Processing, Storage, Output (MS-Access is to be used)

10. Internet and Multimedia (4 hours)

Introduction to Internet, e-mail, Introduction to slide, Making a presentation (MS-PowerPoint is to be used)

11. Programming Concepts (5 hours)

Difference between a computer and calculator, Algorithm, Flowchart, Program, Programming language
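Some of the units above lend themselves to small code sketches. For example, the half adder, full adder, and R-S flip-flop of unit 4 can be modeled as functions over bits (a minimal sketch: real circuits are built from gates, but the logic is the same):

```python
def half_adder(a, b):
    """Return (sum, carry) for two input bits: XOR gives the sum, AND the carry."""
    return a ^ b, a & b

def full_adder(a, b, cin):
    """Add two bits plus a carry-in by chaining two half adders."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2

def sr_flipflop(s, r, q):
    """One step of an R-S flip-flop: set, reset, or hold the stored bit q.
    S = R = 1 is the forbidden input combination."""
    if s and r:
        raise ValueError("S = R = 1 is invalid for an R-S flip-flop")
    return 1 if s else (0 if r else q)

print(full_adder(1, 1, 1))  # (1, 1): sum bit 1, carry-out 1
```

Chaining full adders in this way is how a ripple-carry adder inside an ALU adds multi-bit binary numbers.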

Reference Books:

1. B. Ram: Computer Fundamentals, 1999, Wiley Eastern Publication, New Delhi.

2. O. S. Lawrence: Schaum's Outline of Computers & Business, 2000, McGraw-Hill International, New Delhi.

3. Suresh Basandra: Computer Systems Today, 1999, Galgotia Publication, New Delhi.

4. M. Busby and R. A. Stultz: Office 2000, 2000, BPB Publication, New Delhi.

Computer

A computer is an electronic machine, operating under user control, that accepts data through input devices, performs operations on it, and displays the results on output devices.

Computers are used in a wide range of fields, such as homes, schools, colleges, hospitals, businesses, and industries. They are used to accomplish jobs quickly and efficiently. A computer is a device that can do nothing on its own without programs and instructions. A program is a set of coded instructions that causes a computer to perform particular operations.

Computer System

The computer is called a computer system because its different components work together to produce the desired result for the user. The various components of a computer system are as follows:

Hardware: All the physical components of the computer system, such as the monitor, CPU, and mouse, are called hardware.

Software: The collection of instructions, or logical components, that direct the hardware to perform certain tasks is called software.

Procedure: The way of operating the computer is called the procedure.

Data/instructions: The raw data on which the computer works to produce useful information.

Connectivity: The communication links formed when two or more computers and other peripherals are connected in the computer system.

Computer Architecture

Computer architecture is the theory behind the design of a computer. A digital computer can be divided into three major sections: the CPU, memory, and the I/O unit. The CPU and the other units are linked by parallel communication channels (data, address, and control channels) called buses.

Processor (CPU): The processor is a computer chip (the heart of the computer) that receives data from the input devices, processes the data by performing calculations or reorganizing it, and stores the results in memory until it sends them to an output device or to a backup storage device. The CPU is divided into three major sections:

Control Unit (CU): The control unit manages program instructions so that data is received from input devices and sent to output devices at the right time. It sends out control signals at a speed measured in megahertz (MHz).

Arithmetic and Logic Unit (ALU):

The arithmetic and logic unit carries out all the arithmetic and logical operations needed to process data.

Register Unit:

It is a set of special temporary storage locations in the CPU. Registers very quickly accept, store, and transfer the data and instructions that are currently in use.
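How the three sections cooperate can be sketched with a toy simulation: a loop standing in for the control unit fetches and decodes instructions, an `alu` function performs the operations, and a small list acts as the register unit. The four-field instruction format here is invented purely for illustration:

```python
def alu(op, a, b):
    """Arithmetic and logic unit: performs the actual operation."""
    return {"ADD": a + b, "SUB": a - b, "AND": a & b, "OR": a | b}[op]

def run(program):
    regs = [0] * 4                          # register unit: fast temporary storage
    for op, dst, src1, src2 in program:     # control unit: fetch and decode
        if op == "LOAD":
            regs[dst] = src1                # place an immediate value in a register
        else:
            regs[dst] = alu(op, regs[src1], regs[src2])
    return regs

# Compute 6 + 4 into register 2:
print(run([("LOAD", 0, 6, None), ("LOAD", 1, 4, None), ("ADD", 2, 0, 1)]))
# [6, 4, 10, 0]
```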

Bus: A bus is simply a parallel communication pathway over which information and signals are transferred among several computer components.

Address bus: The address bus carries address signals for addressing data in different locations in computer memory. It is therefore a unidirectional bus.

Data bus: The data bus carries data between the CPU and the other internal units of the computer system. The data bus is bidirectional.

Control bus: Control signals are transmitted on the control bus to ensure that proper timing occurs.

Affecting Factors for Speed of CPU

System Clock Rate: The rate of the electronic pulse used to synchronize processing, measured in MHz (1 MHz = 1 million cycles per second).

Bus Width: The amount of data the CPU can transmit at a time to main memory and to input and output devices. An 8-bit bus moves 8 bits of data at a time. Bus widths so far are 8, 16, 32, 64, and 128 bits.

Word Size: The amount of data that can be processed by the CPU at one time. An 8-bit processor can manipulate 8 bits at a time. Word sizes so far are 8, 16, 32, and 64 bits. The bigger the word size, the faster the computer system.
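A back-of-envelope sketch of how bus width and clock rate combine (the function name and the one-transfer-per-cycle assumption are mine; real buses add wait states and protocol overhead, so this is an upper bound, not a measured figure):

```python
def peak_bus_bytes_per_sec(bus_width_bits, clock_hz, transfers_per_cycle=1):
    """Theoretical peak transfer rate implied by bus width and clock rate."""
    return bus_width_bits // 8 * clock_hz * transfers_per_cycle

# A hypothetical 32-bit bus clocked at 50 MHz:
print(peak_bus_bytes_per_sec(32, 50_000_000))  # 200000000 bytes/s
```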

Characteristics of Computer

Speed: A computer performs complex calculations at very high speed. The time a computer takes to perform a single operation is measured in milliseconds, microseconds, nanoseconds, and picoseconds.

1/1,000 second (10^-3 s) = 1 millisecond

1/1,000,000 second (10^-6 s) = 1 microsecond

1/1,000,000,000 second (10^-9 s) = 1 nanosecond

1/1,000,000,000,000 second (10^-12 s) = 1 picosecond

Storage: A large amount of data can be stored in computer memory. Storage capacity is measured in bytes, kilobytes, megabytes, gigabytes, and terabytes.

1024 bytes = 1 kilobyte (KB)

1024 kilobytes = 1 megabyte (MB)

1024 megabytes = 1 gigabyte (GB)

1024 gigabytes = 1 terabyte (TB)
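The 1024-based units above can be encoded in a small conversion helper (a sketch; `UNITS` simply tabulates the definitions listed):

```python
UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def convert(value, from_unit, to_unit):
    """Convert between binary storage units (1 KB = 1024 bytes, and so on)."""
    return value * UNITS[from_unit] / UNITS[to_unit]

print(convert(1, "GB", "MB"))     # 1024.0
print(convert(2048, "KB", "MB"))  # 2.0
```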

Accuracy: A computer can perform all calculations and comparisons accurately. Sometimes errors may be produced due to faults in the machine or bugs in the programs. If the input data are not correct, the output will also be incorrect; the computer follows the simple rule of GIGO (Garbage In, Garbage Out).

Reliability: A computer never gets tired, bored, or lazy; it is capable of performing tasks repeatedly at the same level of speed and accuracy, even when it has to carry out complex operations for a long period of time. Once a process is given to a computer, it can carry it out automatically.

Automatic: A computer is an automatic machine. Everything given to a computer is processed and completed automatically according to the instructions provided.

Versatility: A computer has a wide range of application areas, i.e., computers can do many types of jobs. They can perform operations ranging from simple mathematical calculations to highly complex logical evaluations over extended periods of time. Some areas of computer application are education, science, technology, business, and research.

Diligence: A computer can perform repetitive tasks without getting bored, tired, or losing concentration. It can work continuously for several hours without human intervention once the data and program are fed to it. It can handle complicated and complex tasks, and there is no aging effect, i.e., its efficiency does not decrease over years of use.

Limitations of Computers

1. Sometimes failures in devices and programs can produce unreliable information.

2. A computer is a dull machine; it has no intelligence of its own.

3. A computer cannot draw conclusions without going through all the intermediate steps.

Historical Development of Computer

The computer, one of the most advanced discoveries of mankind, has a long history. Around 3000 years before the birth of Jesus Christ, there was no number system of any kind, so people had to remember a great deal of information. They felt the need to count their cattle and started counting on their fingers, but the limited number of fingers made it difficult to keep track of larger quantities, so they used stones for counting. As a result, around the fifth century, Hindu philosophers developed a new method of counting using the numerals 1 to 9. In the 8th century, al-Khwarizmi of Iraq introduced 0. Since there are ten digits, this number system is called the decimal system.

Mechanical Era/ the Age of Mechanical Calculator

The most significant early computing tool, the abacus, was developed around 1000-1500 AD: a wooden rack holding parallel rods on which beads of different sizes are strung. Arithmetic operations are carried out by moving the beads along the wires. The frame consists of an upper part, called heaven, and a lower part, called earth; in a common design each rod carries five beads in the earth part and two beads in the heaven part. It is used for addition and subtraction. Around 1500, Leonardo da Vinci sketched a mechanical calculator, which was very heavy. A Scottish mathematician, John Napier (1614), invented another calculating aid, made of bone, that supported addition and multiplication of numbers. These are analog devices, which have been replaced in modern times by pocket calculators. A significant evolution in computing was the invention of the mechanical calculator by the French mathematician Blaise Pascal (1642), whose machine could add and subtract; later designs extended it to multiplication, division, and square roots. In the early 1800s Charles Xavier Thomas (Thomas de Colmar) developed the arithmometer, the first commercially successful mechanical calculator. Charles Babbage (1792-1871) at Cambridge designed the first digital computer: by 1822 he had built an automatic mechanical calculator called the difference engine. Unfortunately, Babbage's analytical engine was never completed because its design required fabrication precision beyond what was feasible at that time. In the 1840s Augusta Ada (Ada Lovelace, the first programmer) suggested binary storage.

In 1887 the American statistician Herman Hollerith constructed a tabulating machine to compute the statistics of the 1890 US census. He used punched cards to store the data, and the machine could read 200 punched cards per minute; this large-scale use of the punched card during the US census was a major step in the evolution of computing systems. In 1904 John Ambrose Fleming invented the vacuum tube, which could store data and instructions but was very large; Lee de Forest later invented the triode. In 1914 Thomas J. Watson became head of Hollerith's successor company, which became International Business Machines Corporation (IBM) in 1924.

Electronic Era/Age of Electronic Mechanical Computer

The electronic era was the time when computers were built from electronic components. The following are some of the key dates and inventions of this era.

1937 - John V. Atanasoff designed the first special-purpose digital electronic computer. Around the same period, Professor Howard Aiken constructed an electromechanical computer named the Mark I, which could carry out long sequences of operations automatically according to pre-programmed instructions. It was based on Charles Babbage's principles, realized about a century after his designs. The machine was huge: about 51 feet long, 8 feet high, and 3 feet wide. Aiken later modified the design to produce the Mark II.

1945 - John W. Mauchly and J. Presper Eckert built the ENIAC (Electronic Numerical Integrator and Computer) for the U.S. Army. With about 18,000 vacuum tubes, the ENIAC was the first high-speed, general-purpose electronic digital computer.

1946 - The UNIVAC (Universal Automatic Computer) was designed by J. Presper Eckert and John Mauchly, the inventors of the ENIAC. The UNIVAC was completed in 1951 and was the first commercial computer produced in the United States.

1948 - Howard Aiken developed the Harvard Mark III electronic computer with 5,000 tubes. The Harvard Mark III, also known as ADEC (Aiken Dahlgren Electronic Calculator), was an early computer that was partly electronic and partly electromechanical. It was built at Harvard University under US Navy sponsorship.

1952 - Remington Rand, which had bought ERA in 1951, merged the ERA and UNIVAC product lines in 1952; that year a UNIVAC computer was used to predict the outcome of the US presidential election.

1950 - The National Bureau of Standards (NBS) introduced its Standards Eastern Automatic Computer (SEAC), with 10,000 newly developed germanium diodes in its logic circuits, along with the first magnetic disk drive, designed by Jacob Rabinow.

1953 - Tom Watson and IBM introduced the model 604 computer, the company's first with transistors, which became the basis of the model 608, the first solid-state computer for the commercial market.

1964 - IBM produced SABRE, the first airline reservation tracking system, for American Airlines, and announced the System/360 family of all-purpose computers, which used an 8-bit character (byte) length.

1968 - DEC introduced the PDP-8, widely regarded as the first minicomputer (named after the miniskirt). DEC had been founded in 1957 by Kenneth H. Olsen, who came from the SAGE project at MIT, and began sales of the PDP-1 in 1960.

1969 - Development began on ARPANET, funded by the US Department of Defense (DOD).

1970 - The first microprocessors and dynamic RAMs were developed; Ted Hoff designed the first microprocessor, the Intel 4004.

1971 - Intel produced large-scale integrated circuits, used in the digital delay line, the first digital audio device. Intel introduced the 4-bit 4004, a chip of about 2,300 transistors, created for a Japanese business-calculator company that wanted a single-chip calculator. Similarly, IBM introduced the first 8-inch memory disk, which came to be called the floppy disk.

1972 - Intel made the 8008 and, later, the 8080 microprocessors; Gary Kildall wrote his Control Program/Monitor (CP/M) disk operating system to provide the instructions for floppy disk drives to work with the 8080 processor.

1973 - IBM developed the first truly sealed hard disk drive, called Winchester after the rifle company, using two 30 MB platters. Robert Metcalfe at Xerox created Ethernet as the basis for local area networks.

1975 - Bill Gates and Paul Allen founded Microsoft Corporation.

1976 - Steve Jobs and Steve Wozniak developed the Apple personal computer; Alan Shugart introduced the 5.25-inch floppy disk.

1980 - IBM signed a contract with Microsoft, the company of Bill Gates, Paul Allen, and Steve Ballmer, to supply an operating system for the IBM PC.

1984 - Apple Computer introduced the Macintosh personal computer on January 24.

1985 - Microsoft released Windows 1.0, the first version of Windows.

1991 - The World Wide Web (WWW), developed by Tim Berners-Lee, was released by CERN.

1993 - Mosaic, the first widely used web browser, was created by student Marc Andreessen and programmer Eric Bina at NCSA in the first three months of 1993. The 0.5 beta version for X/Unix was released on January 23, 1993.

1994 - Netscape Navigator 1.0 was released in December 1994 and given away free, soon gaining 75% of the world browser market.

1996 - Intel Corporation introduced the Pentium Pro (x86) microprocessor.

1997 - Intel Corporation produced the Pentium II.

1999 - Intel Corporation produced the Pentium III.

2000 - Intel Corporation produced the Pentium 4.

History of Computer in Nepal

· In 2018 BS an electronic calculator called "Facit" was used for the census.

· In 2028 BS the IBM 1401, a second-generation mainframe computer, was used for the census.

· In 2031 BS a center for Electronic Data Processing, later renamed the National Computer Center (NCC), was established for national data processing and computer training.

· In 2038 BS the ICL 2950/10 mainframe computer was used for the census.

Generations of Computers

In 1962, scientists classified computers into different classes according to their device technology and system architecture. The history of computer development is often discussed in terms of the different generations of computing devices. A generation refers to a state of improvement in the product development process, and the term is also used for each major advance in computer technology. With each new generation, the circuitry has become smaller and more advanced than in the previous generation. As a result of this miniaturization, the speed, power, and memory of computers have increased proportionally. New discoveries are constantly being made that affect the way we live, work, and play.

Each generation of computers is characterized by a major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful, more efficient, and more reliable devices. The sections below describe each generation and the developments that led to the devices we use today.

First Generation - 1940-1956: Vacuum Tubes

The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. A magnetic drum, also referred to simply as a drum, is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored. Magnetic drums were once used as a primary storage device but have since been relegated to auxiliary storage.

The tracks on a magnetic drum are assigned to channels located around the circumference of the drum, forming adjacent circular bands that wind around the drum. A single drum can have up to 200 tracks. As the drum rotates at a speed of up to 3,000 rpm, the device's read/write heads deposit magnetized spots on the drum during the write operation and sense these spots during a read operation. This action is similar to that of a magnetic tape or disk drive.

They were very expensive to operate, and in addition to using a great deal of electricity they generated a lot of heat, which was often the cause of malfunctions. First-generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Machine languages are the only languages understood by computers. While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. Programmers, therefore, use either high-level programming languages or assembly language. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.

Programs written in high-level programming languages are translated into assembly language or machine language by a compiler. An assembly language program is translated into machine language by a program called an assembler (assembly language compiler).
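The translation chain just described can be glimpsed in Python itself: the interpreter's compiler lowers source code to bytecode, an instruction stream analogous in spirit to assembly language, though for the Python virtual machine rather than a real CPU (an analogy, not actual machine language):

```python
import dis

def add(a, b):
    return a + b

# The compiler has already lowered this high-level function to bytecode;
# dis prints the resulting lower-level instruction stream.
dis.dis(add)
print(add(2, 3))  # 5
```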

Every CPU has its own unique machine language. Programs must therefore be rewritten or recompiled to run on different types of computers. Input was based on punched cards and paper tape, and output was displayed on printouts.

The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.

ENIAC is an acronym for Electronic Numerical Integrator and Computer, the world's first operational electronic digital computer, developed by Army Ordnance to compute World War II ballistic firing tables. The ENIAC, weighing 30 tons, using 200 kilowatts of electric power, and consisting of 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors, was completed in 1945. In addition to ballistics, the ENIAC's fields of application included weather prediction, atomic-energy calculations, cosmic-ray studies, thermal ignition, random-number studies, wind-tunnel design, and other scientific uses. The ENIAC soon became obsolete as the need arose for faster computing speeds.

Some Characteristics:

· Very large in size and slower than later generations.

· Thousands of vacuum tubes were used in a single computer, so they produced a large amount of heat and were prone to frequent hardware failure.

· Punched cards were used as secondary storage.

· Machine-level programming was used.

· The cost was very high, and they were not available for commercial use.

· Computing time was measured in milliseconds.

Second Generation - 1956-1963: Transistors

Transistors replaced vacuum tubes in the second generation computer. Transistor is a device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including computers. Today's latest microprocessor contains tens of millions of microscopic transistors. Prior to the invention of transistors, digital circuits were composed of vacuum tubes, which had many disadvantages. They were much larger, required more energy, dissipated more heat, and were more prone to failures. It's safe to say that without the invention of transistors, computing as we know it today would not be possible.

The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.

Second-generation computers moved from binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology. The first computers of this generation were developed for the atomic energy industry.

Characteristics

· Transistors were smaller, faster, and more reliable than tubes; a single transistor could do the work of many tubes. They occupied less space and were about ten times cheaper than designs using tubes.

· Transistors have no filament, so they required much less electricity and emitted far less heat than vacuum tubes.

· Magnetic cores were developed for primary storage, and magnetic tapes and magnetic disks for secondary storage.

· Second-generation computers replaced machine language with assembly language. COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translation) came into common use during this time.

· The operating speed increased into the microsecond range.

Third Generation - 1964-1971: Integrated Circuits

The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.

Silicon (atomic symbol "Si") is a nonmetallic chemical element in the carbon family of elements and the second most abundant element in the earth's crust, surpassed only by oxygen. Silicon does not occur uncombined in nature. Sand and almost all rocks contain silicon combined with oxygen, forming silica. When silicon combines with other elements, such as iron, aluminum, or potassium, a silicate is formed. Compounds of silicon also occur in the atmosphere, natural waters, many plants, and the bodies of some animals.

Silicon is the basic material used to make computer chips, transistors, silicon diodes and other electronic circuits and switching devices because its atomic structure makes the element an ideal semiconductor. Silicon is commonly doped, or mixed, with other elements, such as boron, phosphorous and arsenic, to alter its conductive properties.

A chip is a small piece of semiconducting material (usually silicon) on which an integrated circuit is embedded. A typical chip is less than 1/4 square inch and can contain millions of electronic components (transistors). Computers consist of many chips placed on electronic boards called printed circuit boards. There are different types of chips. For example, CPU chips (also called microprocessors) contain an entire processing unit, whereas memory chips contain blank memory.

Semiconductor is a material that is neither a good conductor of electricity (like copper) nor a good insulator (like rubber). The most common semiconductor materials are silicon and germanium. These materials are then doped to create an excess or lack of electrons.

Computer chips, both for CPU and memory, are composed of semiconductor materials. Semiconductors make it possible to miniaturize electronic components, such as transistors. Not only does miniaturization mean that the components take up less space, it also means that they are faster and require less energy.

Instead of punched cards and printouts, users interacted with third generation computers through keyboards and monitors and interfaced with an operating system, which allowed the device to run many different applications at one time with a central program that monitored the memory. Computers for the first time became accessible to a mass audience because they were smaller and cheaper than their predecessors.

Characteristics

· ICs proved to be highly reliable, relatively inexpensive, and fast.

· Less human labor was required at the assembly stage.

· Computers became portable. They were smaller in size but had more memory.

· These computers used programming languages such as Pascal and FORTRAN.

Fourth Generation - 1971-Present: Microprocessors

The microprocessor brought in the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip: a silicon chip that contains a CPU. In the world of personal computers, the terms microprocessor and CPU are used interchangeably. At the heart of all personal computers and most workstations sits a microprocessor. Microprocessors also control the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles.

Three basic characteristics differentiate microprocessors:

· Instruction Set: The set of instructions that the microprocessor can execute.

· Bandwidth: The number of bits processed in a single instruction.

· Clock Speed: Given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.

For bandwidth and clock speed, the higher the value, the more powerful the CPU. For example, a 32-bit microprocessor that runs at 50 MHz is more powerful than a 16-bit microprocessor that runs at 25 MHz.
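The comparison above can be illustrated with a deliberately naive score that treats word size and clock speed as independent multipliers (my own toy formula; real performance also depends on the instruction set, memory system, and much else):

```python
def naive_power_score(word_bits, clock_mhz):
    """Toy 'power' score: wider words and faster clocks both scale it up."""
    return word_bits * clock_mhz

# The 32-bit / 50 MHz chip from the text vs the 16-bit / 25 MHz one:
print(naive_power_score(32, 50) > naive_power_score(16, 25))  # True
```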

What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip.

CPU is an abbreviation of central processing unit, pronounced as separate letters. The CPU is the brain of the computer. Sometimes referred to simply as the processor or central processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.

On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single chip called a microprocessor.

Two typical components of a CPU are:

· The arithmetic logic unit (ALU), which performs arithmetic and logical operations.

· The control unit, which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.

In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.

As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth-generation computers also saw the development of GUIs, the mouse, and handheld devices.

Characteristics

· Highly accurate and reliable.

· Operating speeds increased into the picosecond range, with throughput measured in MIPS (millions of instructions per second).

· These chips reduced the physical size of computers and increased their power.

· Magnetic and optical storage devices were used.

Fifth Generation - Present and Beyond: Artificial Intelligence

Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today.

Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy. Artificial intelligence includes:

· Games Playing: programming computers to play games such as chess and checkers

· Expert Systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms)

· Natural Language: programming computers to understand natural human languages

· Neural Networks: Systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains

· Robotics: programming computers to see and hear and react to other sensory stimuli

Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.

In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.

Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators.

There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited -- you must speak slowly and distinctly.

In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations.

Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.

There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.

Characteristics

· They will be able to understand natural language and spoken commands, will have the capacity to see their surroundings, and will have thinking power, called Artificial Intelligence (AI).

· In contrast to the present DIPS/LIPS (Data/Logic Information Processing Systems), fifth-generation computers will have KIPS (Knowledge Information Processing Systems).

· They will fully support parallel processing.

In the beginning ...

        A generation refers to the state of improvement in the development of a product.  This term is also used for the different advancements of computer technology.  With each new generation, the circuitry has become smaller and more advanced than in the previous generation.  As a result of this miniaturization, the speed, power, and memory of computers have increased proportionally.  New discoveries are constantly being made that affect the way we live, work and play.

The First Generation:  1946-1958 (The Vacuum Tube Years)

  The first generation computers were huge, slow, expensive, and often undependable.  In 1946, two Americans, J. Presper Eckert and John Mauchly, built the ENIAC, an electronic computer which used vacuum tubes instead of the mechanical switches of the Mark I.  The ENIAC used thousands of vacuum tubes, which took up a lot of space and gave off a great deal of heat, just like light bulbs do.  The ENIAC led to other vacuum tube computers like the EDVAC (Electronic Discrete Variable Automatic Computer) and the UNIVAC I (UNIVersal Automatic Computer).

        The vacuum tube was an extremely important step in the advancement of computers.  Vacuum tubes grew out of Thomas Edison's light bulb experiments and worked very much like light bulbs.  Their purpose was to act as an amplifier and a switch.  Without any moving parts, vacuum tubes could take very weak signals and make them stronger (amplify them).  Vacuum tubes could also stop and start the flow of electricity instantly (switch).  These two properties made the ENIAC computer possible.

        The ENIAC gave off so much heat that it had to be cooled by gigantic air conditioners.  However, even with these huge coolers, vacuum tubes still overheated regularly.  It was time for something new.

The Second Generation:  1959-1964 (The Era of the Transistor)

        The transistor computer did not last as long as the vacuum tube computer lasted, but it was no less important in the advancement of computer technology.  In 1947 three scientists, John Bardeen, William Shockley, and Walter Brattain working at AT&T's Bell Labs invented what would replace the vacuum tube forever.  This invention was the transistor which functions like a vacuum tube in that it can be used to relay and switch electronic signals.

        There were obvious differences between the transistor and the vacuum tube.  The transistor was faster, more reliable, smaller, and much cheaper to build than a vacuum tube.  One transistor replaced the equivalent of 40 vacuum tubes.  These transistors were made of solid material, notably silicon, an abundant element (second only to oxygen) found in beach sand and glass.  Therefore they were very cheap to produce.  Transistors were found to conduct electricity faster and better than vacuum tubes.  They were also much smaller and gave off virtually no heat compared to vacuum tubes.  Their use marked a new beginning for the computer.  Without this invention, space travel in the 1960s would not have been possible.  However, a new invention would advance our ability to use computers even further.

 The Third Generation:  1965-1970 (Integrated Circuits - Miniaturizing the Computer)

       Transistors were a tremendous breakthrough in advancing the computer.  However, no one could predict that thousands, even now millions, of transistors (circuits) could be compacted into such a small space.  The integrated circuit, sometimes referred to as a semiconductor chip, packs a huge number of transistors onto a single wafer of silicon.  Robert Noyce of Fairchild Corporation and Jack Kilby of Texas Instruments independently discovered the amazing attributes of integrated circuits.  Placing such large numbers of transistors on a single chip vastly increased the power of a single computer and lowered its cost considerably.

        Since the invention of integrated circuits, the number of transistors that can be placed on a single chip has doubled every two years, shrinking both the size and cost of computers even further and further enhancing their power.  Most electronic devices today use some form of integrated circuits placed on printed circuit boards -- thin pieces of bakelite or fiberglass that have electrical connections etched onto them -- sometimes called a motherboard.

        These third generation computers could carry out instructions in billionths of a second.  The size of these machines dropped to the size of small file cabinets.  Yet, the single biggest advancement in the computer era was yet to be discovered.

The Fourth Generation:  1971-Today (The Microprocessor)

        This generation can be characterized by both the jump to monolithic integrated circuits (millions of transistors put onto one integrated circuit chip) and the invention of the microprocessor (a single chip that could do all the processing of a full-scale computer).  By putting millions of transistors onto one single chip, more calculations and faster speeds could be reached by computers.  Because electricity travels about a foot in a billionth of a second, the smaller the distance, the greater the speed of computers.
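The two-year doubling described above (popularly known as Moore's law) compounds dramatically. A rough calculation shows this, taking the roughly 2,300 transistors of the 1971 Intel 4004 as an illustrative starting point:

```python
# Transistor counts doubling every two years (Moore's law), starting
# from the Intel 4004's roughly 2,300 transistors in 1971.  The exact
# starting count is illustrative; the point is the exponential growth.
def transistors(year, start_year=1971, start_count=2300):
    doublings = (year - start_year) // 2   # one doubling per two years
    return start_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001):
    print(year, transistors(year))
# 1971:        2,300
# 1981:       73,600   (5 doublings)
# 1991:    2,355,200   (10 doublings)
# 2001:   75,366,400   (15 doublings)
```

Thirty years of doubling turns thousands of transistors into tens of millions, which is why each generation of chips is so much more powerful than the last.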

        However what really triggered the tremendous growth of computers and its significant impact on our lives is the invention of the microprocessor.  Ted Hoff, employed by Intel (Robert Noyce's new company) invented a chip the size of a pencil eraser that could do all the computing and logic work of a computer.  The microprocessor was made to be used in calculators, not computers.  It led, however, to the invention of personal computers, or microcomputers.

        It wasn't until the 1970s that people began buying computers for personal use.  One of the earliest personal computers was the Altair 8800 computer kit.  In 1975 you could purchase this kit and put it together to make your own personal computer.  In 1977 the Apple II was sold to the public, and in 1981 IBM entered the PC (personal computer) market.

        Today we have all heard of Intel and its Pentium® Processors and now we know how it all got started.  The computers of the next generation will have millions upon millions of transistors on one chip and will perform over a billion calculations in a single second.  There is no end in sight for the computer movement. 

Classification of Computer

On the basis of

Size

Micro Computer

Mini Computer

Mainframe Computer

Super Computer

Work

Analogue Computer

Digital Computer

Hybrid Computer

Brand

Apple/Macintosh Computer

IBM PC

Model

XT Computer

AT Computer

PS2 Computer

Operation

Server

Client

Computer Sizes and Power

(Figure: a scale ranking computers by power, from least powerful to most powerful: personal computers, workstations, minicomputers, mainframes, supercomputers.)

Computers can be generally classified by size and power as follows, though there is considerable overlap:

Personal Computer: A small, single-user computer based on a microprocessor.

Workstation: A powerful, single-user computer. A workstation is like a personal computer, but it has a more powerful microprocessor and, in general, a higher-quality monitor.

Minicomputer: A multi-user computer capable of supporting up to hundreds of users simultaneously.

Mainframe: A powerful multi-user computer capable of supporting many hundreds or thousands of users simultaneously.

Supercomputer: An extremely fast computer that can perform hundreds of millions of instructions per second.

Supercomputer

Highly calculation-intensive tasks can be performed effectively by means of supercomputers. Quantum physics, mechanics, weather forecasting, and molecular theory are best studied by means of supercomputers. Their ability to do parallel processing and their well-designed memory hierarchy give supercomputers large transaction processing power.

Supercomputer is a broad term for one of the fastest computers currently available. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation (number crunching). For example, weather forecasting requires a supercomputer. Other uses of supercomputers include scientific simulations, (animated) graphics, fluid dynamic calculations, nuclear energy research, electronic design, and analysis of geological data (e.g. in petrochemical prospecting). Perhaps the best known supercomputer manufacturer is Cray Research.

· Supercomputers are the most powerful and fastest among digital computers.

· These computers are capable of handling huge amounts of calculations that are beyond human capabilities. They can perform billions of instructions per second (BIPS).

· Supercomputers have computing capability equal to that of 40,000 microcomputers.

· A Japanese supercomputer has calculated the value of PI (π) to 16 million decimal places.

· These computers cost in the 15-to-20-million-dollar range, making them the most expensive.

· They are mostly used in weather forecasting and scientific calculations.

· Examples: CRAY X-MP/24, NEC-500, PARAM, ANURAG. Among them, PARAM and ANURAG are supercomputers produced in India and exported to European countries.

· These were some of the different types of computers available today. Looking at the rate of the advancement in technology, we can definitely look forward to many more types of computers in the near future.

· The Columbia Supercomputer - once one of the fastest.

· Supercomputers are fast because they're really many computers working together.

· Supercomputers were introduced in the 1960s as the world's most advanced computers. These computers were used for intense calculations such as weather forecasting and quantum physics. Today, supercomputers are one of a kind, fast, and very advanced. The term supercomputer is always evolving: tomorrow's normal computers are today's supercomputers. As of November 2008, the fastest supercomputer was the IBM Roadrunner, with a theoretical processing peak of 1.71 petaflops and a measured peak of 1.456 petaflops.

Mainframe

Mainframe was a term originally referring to the cabinet containing the central processor unit or "main frame" of a room-filling Stone Age batch machine. After the emergence of smaller "minicomputer" designs in the early 1970s, the traditional big iron machines were described as "mainframe computers" and eventually just as mainframes. Nowadays a Mainframe is a very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its machines.

· Mainframe computers are very large and powerful

· It is a general purpose computer designed for large scale data processing.

· Very large in size, occupying an area of approximately 10,000 sq. ft.

· They support a large number of terminals.

· They are suitable for large organizations like banks and hospitals.

· They can be used in networking systems

· Some popular systems are IBM 1401, ICL 2950/10, ICL 39, and CYBER 170.

Mainframe computer

Mainframes are computers where all the processing is done centrally, and the user terminals are called "dumb terminals" since they only input and output (and do not process).

Mainframes are computers used mainly by large organizations for critical applications, typically bulk data processing such as census. Examples: banks, airlines, insurance companies, and colleges.

Minicomputer/Workstation

It is a midsize computer. In the past decade, the distinction between large minicomputers and small mainframes has blurred, as has the distinction between small minicomputers and workstations. But in general, a minicomputer is a multiprocessing system capable of supporting up to 200 users simultaneously.

· Minis are smaller than Mainframe computers.

· They are medium sized computers.

· They can support up to 50 terminals.

· They require an area of about 100 sq. ft.

· These computers are useful for small businesses and universities.

· Examples: Prime 9755, VAX 7500, HCL, MAGNUM etc.

· Workstations are high-end, expensive computers that are made for more complex procedures and are intended for one user at a time. Some of the complex procedures consist of science, math and engineering calculations and are useful for computer design and manufacturing. Workstations are sometimes improperly named for marketing reasons. Real workstations are not usually sold in retail.

· The movie Toy Story was made on a set of Sun (Sparc) workstations

· Perhaps the first computer that might qualify as a "workstation" was the IBM 1620.

Microcomputer

A microcomputer is a computer built around a microprocessor and designed for individual use. Microcomputers range from home and office PCs to high-end workstations used for engineering applications (CAD/CAM), desktop publishing, software development, and other applications that require a moderate amount of computing power and relatively high quality graphics capabilities. They generally come with a large, high-resolution graphics screen, a large amount of RAM, built-in network support, and a graphical user interface. Most microcomputers also have a mass storage device such as a disk drive, but a special type, called a diskless workstation, comes without one. Common operating systems are UNIX and Windows NT. Like personal computers, most microcomputers are single-user machines; however, they are typically linked together to form a local-area network, although they can also be used as stand-alone systems.

· A computer which is based on microprocessor is called microcomputer.

· It is a small, low-cost digital computer.

· It requires little space and can even be placed on a desktop.

· They are mainly used in homes, offices, shops, and stores. They can be connected to networking systems.

· Examples: IBM PC, Macintosh, etc.

Personal computer:

It can be defined as a small, relatively inexpensive computer designed for an individual user. In price, personal computers range anywhere from a few hundred pounds to over five thousand pounds. All are based on the microprocessor technology that enables manufacturers to put an entire CPU on one chip. Businesses use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet and database management applications. At home, the most popular use for personal computers is for playing games and recently for surfing the Internet. Personal computers first appeared in the late 1970s. One of the first and most popular personal computers was the Apple II, introduced in 1977 by Apple Computer. During the late 1970s and early 1980s, new models and competing operating systems seemed to appear daily. Then, in 1981, IBM entered the fray with its first personal computer, known as the IBM PC. The IBM PC quickly became the personal computer of choice, and most other personal computer manufacturers fell by the wayside. P.C. is short for personal computer or IBM PC. One of the few companies to survive IBM's onslaught was Apple Computer, which remains a major player in the personal computer marketplace. Other companies adjusted to IBM's dominance by building IBM clones, computers that were internally almost the same as the IBM PC, but that cost less. Because IBM clones used the same microprocessors as IBM PCs, they were capable of running the same software. Over the years, IBM has lost much of its influence in directing the evolution of PCs. Therefore after the release of the first PC by IBM the term PC increasingly came to mean IBM or IBM-compatible personal computers, to the exclusion of other types of personal computers, such as Macintoshes. In recent years, the term PC has become more and more difficult to pin down. 
In general, though, it applies to any personal computer based on an Intel microprocessor, or on an Intel-compatible microprocessor. For nearly every other component, including the operating system, there are several options, all of which fall under the rubric of PC

Today, the world of personal computers is basically divided between Apple Macintoshes and PCs. The principal characteristics of personal computers are that they are single-user systems and are based on microprocessors. However, although personal computers are designed as single-user systems, it is common to link them together to form a network. In terms of power, there is great variety. At the high end, the distinction between personal computers and workstations has faded. High-end models of the Macintosh and PC offer the same computing power and graphics capability as low-end workstations by Sun Microsystems, Hewlett-Packard, and DEC.

Personal Computer Types

Actual personal computers can be generally classified by size and chassis / case. The chassis or case is the metal frame that serves as the structural support for electronic components. Every computer system requires at least one chassis to house the circuit boards and wiring. The chassis also contains slots for expansion boards. If you want to insert more boards than there are slots, you will need an expansion chassis, which provides additional slots. There are two basic flavors of chassis designs–desktop models and tower models–but there are many variations on these two basic types. Then come the portable computers that are computers small enough to carry. Portable computers include notebook and sub notebook computers, hand-held computers, palmtops, and PDAs.

Tower model

The term refers to a computer in which the power supply, motherboard, and mass storage devices are stacked on top of each other in a cabinet. This is in contrast to desktop models, in which these components are housed in a more compact box. The main advantage of tower models is that there are fewer space constraints, which makes installation of additional storage devices easier.

Desktop model

A computer designed to fit comfortably on top of a desk, typically with the monitor sitting on top of the computer. Desktop model computers are broad and low, whereas tower model computers are narrow and tall. Because of their shape, desktop model computers are generally limited to three internal mass storage devices. Desktop models designed to be very small are sometimes referred to as slim line models.

Notebook computer

An extremely lightweight personal computer. Notebook computers typically weigh less than 6 pounds and are small enough to fit easily in a briefcase. Aside from size, the principal difference between a notebook computer and a personal computer is the display screen. Notebook computers use a variety of techniques, known as flat-panel technologies, to produce a lightweight and non-bulky display screen. The quality of notebook display screens varies considerably. In terms of computing power, modern notebook computers are nearly equivalent to personal computers. They have the same CPUs, memory capacity, and disk drives. However, all this power in a small package is expensive. Notebook computers cost about twice as much as equivalent regular-sized computers. Notebook computers come with battery packs that enable you to run them without plugging them in. However, the batteries need to be recharged every few hours.

Laptop computer

A small, portable computer -- small enough that it can sit on your lap. Nowadays, laptop computers are more frequently called notebook computers.

Sub notebook computer

A portable computer that is slightly lighter and smaller than a full-sized notebook computer. Typically, sub notebook computers have a smaller keyboard and screen, but are otherwise equivalent to notebook computers.

Hand-held computer

A portable computer that is small enough to be held in one’s hand. Although extremely convenient to carry, handheld computers have not replaced notebook computers because of their small keyboards and screens. The most popular hand-held computers are those that are specifically designed to provide PIM (personal information manager) functions, such as a calendar and address book. Some manufacturers are trying to solve the small keyboard problem by replacing the keyboard with an electronic pen. However, these pen-based devices rely on handwriting recognition technologies, which are still in their infancy. Hand-held computers are also called PDAs, palmtops and pocket computers.

Palmtop

A small computer that literally fits in your palm. Compared to full-size computers, palmtops are severely limited, but they are practical for certain functions such as phone books and calendars. Palmtops that use a pen rather than a keyboard for input are often called hand-held computers or PDAs. Because of their small size, most palmtop computers do not include disk drives. However, many contain PCMCIA slots in which you can insert disk drives, modems, memory, and other devices. Palmtops are also called PDAs, hand-held computers and pocket computers.

PDA

Personal Digital Assistants (PDAs): It is a handheld computer and popularly known as a palmtop. It has a touch screen and a memory card for storage of data. PDAs can also be effectively used as portable audio players, web browsers and smart phones. Most of them can access the Internet by means of Bluetooth or Wi-Fi communication. Short for personal digital assistant, a handheld device that combines computing, telephone/fax, and networking features. A typical PDA can function as a cellular phone, fax sender, and personal organizer. Unlike portable computers, most PDAs are pen-based, using a stylus rather than a keyboard for input. This means that they also incorporate handwriting recognition features. Some PDAs can also react to voice input by using voice recognition technologies. The field of PDA was pioneered by Apple Computer, which introduced the Newton MessagePad in 1993. Shortly thereafter, several other manufacturers offered similar products. To date, PDAs have had only modest success in the marketplace, due to their high price tags and limited applications. However, many experts believe that PDAs will eventually become common gadgets.

PDAs are also called palmtops, hand-held computers and pocket computers.

On the Basis of Working Principle

Based on their operational principle, computers are categorized as analog computers, digital computers, and hybrid computers.

Analog Computers

These are almost extinct today. They differ from digital computers in that an analog computer can perform several mathematical operations simultaneously. It uses continuous variables for mathematical operations and utilizes mechanical or electrical energy.

The computer which processes analogue quantities (continuous data) is called an analogue computer. A watch with hands, for example, is an analogue device.

· Analogue computer operates by measuring rather than counting.

· They are slower than digital computer.

· They are designed to measure physical quantities such as temperature and pressure.

· They are mostly used in engineering and scientific application.

· Analogue computers are used in hospitals, for example to measure the size of a stone in a kidney and in mental disease diagnostics (CT scanning with images).

Digital Computer

· The computer which accepts discrete data is known as a digital computer. A digital watch, for example, is called digital because it goes from one value to the next without displaying all the intermediate values, and can display only a finite number of values.

· Each quantity in such a computer is represented by a binary number consisting of 0s and 1s. There is no way to represent values in between 0 and 1, so all data that the computer processes must be encoded digitally, as a series of zeros and ones.

· Digital computers are mostly used for general purpose.

· Digital computers are faster than analogue.

· It has large memory capacity.

· Example: IBM PC, Apple/Macintosh.
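The encoding of quantities as series of zeros and ones can be seen directly in Python, whose built-in bin(), int(), ord() and format() functions convert between decimal values, binary representation, and character codes:

```python
# A digital computer stores every quantity as a binary number (0s and 1s).
number = 13
binary = bin(number)            # convert decimal 13 to its binary form
print(binary)                   # prints 0b1101

# Converting back: interpret '1101' as a base-2 number.
print(int("1101", 2))           # prints 13

# Even text is stored digitally: each character has a numeric code,
# which is itself held as a pattern of bits.
print(format(ord("A"), "08b"))  # prints 01000001 -- how 'A' is stored
```

Between 1101 (13) and 1110 (14) there is no intermediate binary value, which is exactly the "finite number of values" property that distinguishes digital from analogue representation.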

Hybrid Computers: These computers are a combination of both digital and analog computers. In this type of computer, the digital segments perform process control by converting analog signals to digital ones. Some examples of hybrid computer use follow.

It can transfer data from analogue to digital form and vice versa.

· During the launch of a rocket, analogue computers measure the speed of the rocket and the temperature and pressure of the atmosphere. These measurements are then converted into digital signals and fed to a digital computer.

· In hospitals, analogue devices measure the temperature and blood pressure of a patient; these measurements are then converted into digital signals and fed to a digital computer.

On the Basis of Operation

Server

Inside of a Rack unit Server

Servers are similar to mainframes in that they serve many users, with the main difference that the users (called clients) usually do their own processing. The server processes are devoted to sharing files and managing log-on rights.

A server is a central computer that contains collections of data and programs. Also called a network server, this system allows all connected users to share and store electronic data and applications. Two important types of servers are file servers and application servers.

Client Computer

Computers used in a network that send requests to a server for their operation are called client computers. A personal computer is sometimes used as a client computer.
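The request/reply relationship between a client and a server can be sketched with a short Python program. This is a minimal illustration using an arbitrary local port (50007), not how real network servers are written:

```python
# A minimal client/server sketch: the server shares data; the client
# sends a request and then does its own processing on the reply.
import socket
import threading

ready = threading.Event()  # signals that the server is listening

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("127.0.0.1", 50007))   # arbitrary port for this example
        s.listen(1)
        ready.set()
        conn, _ = s.accept()
        with conn:
            request = conn.recv(1024).decode()
            # The server only serves the shared data; it does no
            # client-side processing.
            if request == "GET data":
                conn.sendall(b"10,20,30")

def client():
    ready.wait()  # don't connect before the server is listening
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("127.0.0.1", 50007))
        s.sendall(b"GET data")
        reply = s.recv(1024).decode()
    # The client does its own processing on the data it received.
    return sum(int(x) for x in reply.split(","))

t = threading.Thread(target=server)
t.start()
result = client()
t.join()
print(result)  # prints 60
```

The division of labour mirrors the text above: the server stores and shares the data, while the client fetches it and performs the computation (here, summing the values) itself.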

A personal computer (PC)

PC is an abbreviation for Personal Computer; it is also known as a microcomputer. Its physical characteristics and low cost are appealing and useful for its users. The capabilities of a personal computer have changed greatly since the introduction of electronic computers. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single individual. The introduction of the microprocessor, a single chip with all the circuitry that formerly occupied large cabinets, led to the proliferation of personal computers after about 1975. Early personal computers, generally called microcomputers, were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. By the late 1970s, mass-market pre-assembled computers allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware. Throughout the 1970s and 1980s, home computers were developed for household use, offering some personal productivity, programming and games, while somewhat larger and more expensive systems (although still low-cost compared with minicomputers and mainframes) were aimed at office and small business use.

On the Basis of Brand

IBM PC

IBM PC is a microcomputer produced by IBM, a company that grew out of Herman Hollerith's tabulating machine business and adopted the name IBM in 1924. It is a leader in the market for mainframes and PCs. IBM PCs use processors, multimedia devices and some other hardware parts developed by other companies, such as Intel, but follow IBM's own design principles. All computers developed by the IBM company are called IBM computers.

IBM Compatible:

An IBM compatible can use hardware and software designed for the IBM PC. The internal architecture of an IBM compatible is similar to that of the IBM PC, so they are also called clones. Examples: Epson, Acer, etc.

Apple/Macintosh

Apple Computer was established in 1976 in the USA. Its computers are called Apple/Macintosh (Mac) computers. The internal architecture of these computers is totally different from that of the IBM PC. Therefore they need their own software.

On the Basis of Model

XT computer:

XT (eXtended Technology) computers are old-technology computers with much slower processing speed (not more than 4.77 MHz). Advanced GUI-based software like Windows cannot run on these computers; everything was based on a text-based system. They used processors such as the Intel 8080 and 8088. They could not handle complex calculations or large processing jobs, and their I/O devices were neither flexible nor fast. They used an 8-bit data bus.

AT computer:

AT (Advanced Technology) computers are newer-technology computers. They are faster in processing (more than 2 GHz) and can run any type of software with a rich GUI and color. They used processors such as the 80286, the 80386, and the Pentium. Any type of complex and long processing can be done, depending on the capacity of the computer. I/O devices are interactive, flexible and fast. Word length extends up to 64 bits. Coprocessors are used to help the main processor with complex mathematics.

PS2 Computer:

Actually, these are not a totally different model of computer but a refinement of AT computers. These models were built after the 1990s and are mostly used in laptop computers. Rechargeable battery operation and faster, more flexible I/O devices are some important characteristics of these computers. The OS/2 operating system was used at the beginning, but nowadays the Windows operating system is leading.

Computer software

Software is a computer program: a sequence of instructions designed to direct a computer to perform certain tasks. Software enables a computer to receive input, store information, make decisions, and manipulate and output data in the correct format. A program consists of instructions that tell the computer what to do and how to behave. When we buy a computer we do not automatically get every program produced in the world. It may come loaded with an operating system (like Windows XP); if we want to write text, present some slides or do some calculations, then we must install an office package, which is another piece of software.

System software: the software most essential for computer operation; it directs the interoperation of the system and its hardware through services, utilities, drivers and configuration files. The programs that are part of the computer system include assemblers, compilers, file management tools and system utilities.

For example: Windows 95, Windows 98, Windows XP, Windows Vista, Red Hat Linux, etc.

Application software: the type of software used for a user's specific application is called application software. It consists of a number of programs designed to perform specific user tasks. E.g. Word, Excel, PowerPoint, Photoshop, CorelDraw, SPSS, Stata, Epi Info, etc.

 Questions

Directions: Answer each of the questions after reading the article above. Write in complete sentences. You must think and be creative with your answers.

1. In each of the 4 generations, what was the cause of the increase in speed, power, or memory?

2. Why did the ENIAC and other computers like it give off so much heat?  (Be very specific)

3. What characteristics made the transistors better than the vacuum tube?

4. How was space travel made possible through the invention of transistors?

5. What did the microprocessor allow computers to do, and what was the microprocessor's original purpose?

6. When was the first computer offered to the public and what was its name?

7. Who started Intel?

8. What are monolithic integrated circuits?

9. How do you think society will be different if scientists are able to create a chip that will perform a trillion operations in a single second?

Computer Program and Programming Language

A computer program is a set of instructions that, when executed, causes the computer to behave in a predetermined manner. Without programs, computers are useless and cannot do anything. Many people believe that computers are intelligent devices, but this concept is wrong. A computer cannot understand human natural languages like English or Nepali. To instruct a computer to perform a certain job we need a language that the computer can understand. The languages used to instruct the computer to do certain jobs are called computer programming languages. There are many programming languages, such as C, C++, Pascal and BASIC.

Number System

The distinct symbols, characters and alphabets used to measure a physical quantity are termed a number system. Various number systems are used for encoding and decoding data in a computer. A number system is distinguished by its base.

Rules for Conversion

The decimal number is divided by the target base; the quotient and remainder are noted at each step.

The quotient of one stage is divided by the same base (2, 8 or 16 respectively) at the next stage.

The process repeats until the quotient is less than the base (the divider).

Reading the remainders from last to first gives the result: the left-most digit is the most significant digit and the right-most digit is the least significant digit.
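The repeated-division rule above can be sketched in Python (the helper name `to_base` is chosen here for illustration): divide by the base, collect the remainders, and read them in reverse.

```python
def to_base(n, base):
    """Convert a non-negative decimal integer to the given base
    by repeated division, collecting remainders."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)   # quotient carries to the next stage
        out.append(digits[r])
    # remainders come out least-significant first, so reverse them
    return "".join(reversed(out))

print(to_base(25, 2))   # binary
print(to_base(63, 8))   # octal
print(to_base(66, 16))  # hexadecimal
```

Running it reproduces the worked examples that follow: 25 → 11001, 63 → 77, 66 → 42.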

Number conversion table

Decimal: digits 0,1,2,3,4,5,6,7,8,9 (base 10). Positional weights (right to left): ones, tens, hundreds, thousands, …

(25)10 to binary — divide repeatedly by 2 and note the remainders:

25 ÷ 2 = 12 remainder 1
12 ÷ 2 = 6 remainder 0
6 ÷ 2 = 3 remainder 0
3 ÷ 2 = 1 remainder 1
final quotient 1

Reading from the last quotient upward: (11001)2

(63)10 to octal: 63 ÷ 8 = 7 remainder 7, final quotient 7 → (77)8

(66)10 to hexadecimal: 66 ÷ 16 = 4 remainder 2, final quotient 4 → (42)16

Binary: digits 0,1 (base 2). Positional weights (right to left): 1, 2, 4, 8, 16, 32, …

(111)2 to decimal: 1×4 + 1×2 + 1×1 = (7)10

(101100)2 to octal — select 3-digit frames from the right and convert each to its decimal equivalent:

101 | 100 → (4×1 + 2×0 + 1×1) = 5 and (4×1 + 2×0 + 1×0) = 4 → (54)8

(101111)2 to hexadecimal — select 4-digit frames from the right and convert each to its decimal equivalent:

0010 | 1111 → 2 and 15(F) → (2F)16

Octal: digits 0,1,2,3,4,5,6,7 (base 8). Positional weights (right to left): 1, 8, 64, 512, 4096, …

(56)8 to decimal: 5×8 + 6×1 = 40 + 6 = (46)10

(53)8 to binary — each octal digit becomes a 3-bit frame: 5 → 101, 3 → 011 → (101011)2

(144)8 to hexadecimal: 1 → 001, 4 → 100, 4 → 100 gives (001100100)2; regroup into 4-bit frames from the right: 0110 | 0100 → 6 and 4 → (64)16

Hexadecimal: digits 0,1,2,3,4,5,6,7,8,9, 10(A), 11(B), 12(C), 13(D), 14(E), 15(F) (base 16). Positional weights (right to left): 1, 16, 256, 4096, …

(2B)16 to decimal: 2×16 + 11×1 = 32 + 11 = (43)10

(7E)16 to binary — each hex digit becomes a 4-bit frame: 7 → 0111, E → 1110 → (01111110)2

(3DE)16 to octal: 3 → 0011, D → 1101, E → 1110 gives (001111011110)2; regroup into 3-bit frames: 001 | 111 | 011 | 110 → (1736)8
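The worked examples above can be cross-checked with Python's built-in base handling: `int(s, base)` parses a string in a given base, and `format` renders a number in binary, octal or hexadecimal.

```python
# Cross-check the conversion examples using Python built-ins.
assert int("11001", 2) == 25          # (11001)2 = (25)10
assert int("101100", 2) == 0o54       # (101100)2 = (54)8
assert int("101111", 2) == 0x2F       # (101111)2 = (2F)16
assert int("56", 8) == 46             # (56)8  = (46)10
assert int("2B", 16) == 43            # (2B)16 = (43)10
assert format(int("3DE", 16), "o") == "1736"   # (3DE)16 = (1736)8
print("all conversions agree")
```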

Conversion of Fractional Number

Numbers that have both an integer part and a fractional part are called real numbers or floating-point numbers. Real numbers may be positive (+ve) or negative (−ve); in scientific calculation it is often necessary to work with very large or very small numbers. A fractional number can also be converted from the decimal system into other number systems; the converted fraction represents a value as close as possible to the original number.

Rules to convert fractional numbers:

Place the fractional number, then multiply it by the target base (2 for binary, 8 for octal, 16 for hexadecimal).

If a carry appears before the decimal point, that carry digit is the next digit of the result; otherwise the next digit is 0.

Continue up to about 6 places for hexadecimal and 5 places for octal if the fraction does not terminate.

Read the digits from top to bottom (the reverse of the integer conversion, where remainders are read bottom-up).

Examples:

(0.10111)2 to decimal: 1×2^-1 + 0×2^-2 + 1×2^-3 + 1×2^-4 + 1×2^-5 = 0.5 + 0 + 0.125 + 0.0625 + 0.03125 = (0.71875)10

(0.563)8 to decimal: 5×8^-1 + 6×8^-2 + 3×8^-3 = 0.625 + 0.09375 + 0.005859375 = (0.724609375)10

(0.5A6B)16 to decimal: 5×16^-1 + 10×16^-2 + 6×16^-3 + 11×16^-4 = 0.3125 + 0.0390625 + 0.0014648 + 0.0001678 ≈ (0.3531951)10

(0.8125)10 to binary: 0.8125×2 = 1.625 → 1; 0.625×2 = 1.25 → 1; 0.25×2 = 0.5 → 0; 0.5×2 = 1.0 → 1 → (0.1101)2

(0.96)10 to octal: 0.96×8 = 7.68 → 7; 0.68×8 = 5.44 → 5; 0.44×8 = 3.52 → 3; 0.52×8 = 4.16 → 4; 0.16×8 = 1.28 → 1 → (0.75341)8

(0.62)10 to hexadecimal: 0.62×16 = 9.92 → 9; 0.92×16 = 14.72 → E; 0.72×16 = 11.52 → B; 0.52×16 = 8.32 → 8; 0.32×16 = 5.12 → 5; 0.12×16 = 1.92 → 1 → (0.9EB851)16

(0.635)10 to binary: 0.635×2 = 1.27 → 1; 0.27×2 = 0.54 → 0; 0.54×2 = 1.08 → 1; 0.08×2 = 0.16 → 0; 0.16×2 = 0.32 → 0; 0.32×2 = 0.64 → 0; 0.64×2 = 1.28 → 1 → (0.1010001)2
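The repeated-multiplication rule can be sketched in Python (the name `frac_to_base` is chosen here for illustration): multiply by the base and peel off the integer part as the next digit.

```python
def frac_to_base(f, base, places=6):
    """Convert a decimal fraction (0 <= f < 1) by repeatedly multiplying
    by the base and taking the integer part as the next digit."""
    digits = "0123456789ABCDEF"
    out = []
    for _ in range(places):
        f *= base
        d = int(f)          # the carry before the point, if any
        out.append(digits[d])
        f -= d
        if f == 0:          # the fraction terminated exactly
            break
    return "0." + "".join(out)

print(frac_to_base(0.8125, 2))   # terminates exactly in binary
print(frac_to_base(0.625, 16))
```

Note that fractions which do not terminate (like 0.62 in hexadecimal) are simply cut off after `places` digits.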

Convert the fractional binary number (1101.1010)2 into decimal

Integer part 1101: 1×2^3 + 1×2^2 + 0×2^1 + 1×2^0 = 8 + 4 + 0 + 1 = 13

Fractional part .1010: 1×2^-1 + 0×2^-2 + 1×2^-3 + 0×2^-4 = 0.5 + 0 + 0.125 + 0 = 0.625

Result: (13.625)10

Convert the decimal real number (12.625)10 into binary real number

Integer part 12: 12 ÷ 2 = 6 remainder 0; 6 ÷ 2 = 3 remainder 0; 3 ÷ 2 = 1 remainder 1; final quotient 1 → (1100)2

Fractional part 0.625:
0.625 × 2 = 1.25 → 1
0.25 × 2 = 0.50 → 0
0.50 × 2 = 1.00 → 1
→ (.101)2

Result: (1100.101)2
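Combining the two procedures, here is a small sketch (assuming a non-negative input whose fraction terminates within a few places; `real_to_binary` is an illustrative name):

```python
def real_to_binary(x, places=4):
    """Convert a non-negative decimal real number to binary: repeated
    division for the integer part, repeated multiplication for the
    fraction."""
    i = int(x)
    f = x - i
    int_part = bin(i)[2:]        # bin() gives '0b...', strip the prefix
    bits = []
    for _ in range(places):
        f *= 2
        bits.append(str(int(f)))
        f -= int(f)
        if f == 0:
            break
    return int_part + ("." + "".join(bits) if bits else "")

print(real_to_binary(12.625))
print(real_to_binary(13.625))
```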

Convert real hexadecimal number (6D.3A) to its equivalent binary number

6 D(13) . 3 A(10)

(0110)(1101).(0011)(1010)

(1101101.00111010)2

Convert hexadecimal number 3DE to its equivalent octal number

(3DE) 16

3 D(13) E(14)

(0011)(1101)(1110)

To obtain the octal equivalent, regroup into 3-bit frames:

(001)(111)(011)(110) → (1736)8

Convert the real hexadecimal number 5B.3A to its equivalent octal

5 B(11) . 3 A(10)

(0101)(1011).(0011)(1010)

(01011011.00111010)2

Regrouped into 3-bit frames: (001)(011)(011).(001)(110)(100)

(133.164)8

Convert the real octal number 46.57 to its equivalent hexadecimal

46.57

(100)(110). (101)(111)

(100110.101111)2

(0010)(0110). (1011)(1100)

(26.BC)16
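For the integer part, the grouping method (hex or octal digits ↔ binary frames) can be cross-checked with Python's base parsing and formatting:

```python
# Parse a hex number, then let Python render the same value in binary
# and octal — equivalent to regrouping the bit frames by hand.
x = int("3DE", 16)
print(format(x, "b"))   # the binary frames joined together
print(format(x, "o"))   # the 3-bit regrouping
assert format(x, "o") == "1736"
```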

Addition of Binary Numbers

In the binary number system, when 1 is added to 1 the sum is zero with a carry of 1. If the sum is written with 2 bits, it is equal to 10 (2 in decimal).

Addition rules: 0+0 = 0; 0+1 = 1; 1+0 = 1; 1+1 = 10 (sum 0, carry 1).
Subtraction rules: 0−0 = 0; 1−0 = 1; 1−1 = 0; 0−1 = 1 with a borrow of 1.

Examples:

14 (1110) − 5 (0101) = 9 (1001)
9 (1001) + 5 (0101) = 14 (1110)
10 (1010) + 13 (1101) = 23 (10111)
13 (1101) − 7 (0111) = 6 (0110)
7 (0111) − 8 (1000) = −1 (the subtrahend is larger, so the result is negative: −0001)
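The addition examples can be checked quickly with a sketch that leans on Python's built-ins rather than digit-by-digit carries (`add_binary` is a name chosen here):

```python
def add_binary(a, b):
    """Add two binary strings: parse with int(s, 2), add,
    and format the sum back with bin()."""
    return bin(int(a, 2) + int(b, 2))[2:]

print(add_binary("1001", "0101"))   # 9 + 5
print(add_binary("1010", "1101"))   # 10 + 13
```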

The Use of complements to represents negative number

We know most of today's computers work on the binary number system (base 2, digits 0/1). The computer performs subtraction using complemented numbers. This is very economical, since arithmetic and logical operations can then be done in the same unit. To represent negative numbers in binary, we use 2's complement.

9’s Complement:

The 9's complement of a decimal number is calculated by subtracting each of its digits from 9.

Example: 37 in decimal can be represented as (99 − 37) = 62 (the 9's complement of decimal 37).

Similarly, 234 in decimal can be represented as (999 − 234) = 765 (the 9's complement of decimal 234).

10’s Complement:

The 10's complement of a decimal number is equal to its 9's complement plus 1.

Example: 37 → (99 − 37) = 62, and 62 + 1 = 63, so decimal 37 is represented in 10's complement as 63.

37 + 63 = 1 00; if we ignore the carry 1 it becomes 00, so we can conclude that the sum of a decimal number and its 10's complement is zero.

Addition of 10’s complements

Add 86 and (-21)

9’s complement of (-21) = 99-21=78 then 10’s complement=79

86+79= 1 65 then carry 1 is ignored we get 65

Add 59 and (-84)

9’s complement of (-84) is 99-84=15 then 10’s complement of (-84) is 16

59 + 16 = 75. Since there is no carry, the result is negative: 75 is the 10's complement of 25 (check: 99 − 75 = 24, 24 + 1 = 25), i.e. 59 − 84 = −25.

Add (-26) and (-43)

9's complement of 26 = 99 − 26 = 73, so its 10's complement = 73 + 1 = 74.

9's complement of 43 = 99 − 43 = 56, so its 10's complement = 56 + 1 = 57.

74 + 57 = 1 31; ignoring the carry 1 leaves 31, which is the 10's complement representation of −69 (99 − 31 = 68, 68 + 1 = 69).

Add 34 and 58

34 + 58 = 92: just add the two numbers (there is no carry and the sum is less than 100, so the sum is correct as an ordinary decimal number).
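The 10's-complement trick can be sketched in Python (`tens_complement` is an illustrative name; two-digit numbers are assumed, matching the examples above):

```python
def tens_complement(n, digits=2):
    """10's complement of an n-digit decimal number:
    the 9's complement (each digit subtracted from 9) plus 1."""
    return (10 ** digits - 1 - n) + 1

# Add 86 and (-21): replace -21 by its 10's complement, add,
# then drop the carry beyond two digits.
s = 86 + tens_complement(21)
print(s, "-> dropping the carry gives", s % 100)
```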

1’s Complement

One's complement in binary is similar to the 9's complement in decimal. To obtain the 1's complement of a binary number we just invert each of its bits (0→1, 1→0). Example:

1110 → 0001, 01101 → 10010

2’s Complement:

2's complement in the binary number system is similar to the 10's complement in decimal: 2's complement = 1's complement + 1.

Find the 2's complement of 101100 (invert its bits to get the 1's complement, then add 1 to get the 2's complement):

Example: 101100 → 010011, and 010011 + 1 = 010100

Find the 2's complement of 111: 000 + 1 = 001

Add the binary number 1100 and its 2's complement:

1100 → 0011 (1's complement) + 1 = 0100 (2's complement)

1100 + 0100 = 1 0000; if we ignore the carry 1 it becomes zero again, so the sum of a binary number and its 2's complement is zero.

Add the binary number 1011 and its 2's complement:

1011 → 0100 + 1 = 0101

1011 + 0101 = 1 0000 (zero, if we ignore the carry 1)

Subtraction using 2’s complement

The addition of the 2's complement of a number is equivalent to subtracting it. This will be clear from the following examples:

Subtract 2 from 6

6 (0110) and 2 (0010); 2's complement of 2 = 1101 + 1 = 1110

0110 + 1110 = 1 0100; if we ignore the carry 1, the final answer is 0100 = 4

Subtract 3 from 5

3 (0011); 2's complement = 1100 + 1 = 1101

0101 + 1101 = 1 0010; if we ignore the carry 1, 0010 = 2 in binary
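The same idea in code: a minimal sketch of 2's-complement subtraction on a fixed word size (`subtract_twos` and the 4-bit default are choices made here for illustration):

```python
def subtract_twos(a, b, bits=4):
    """Subtract b from a by adding the 2's complement of b and
    discarding any carry beyond the word size."""
    mask = (1 << bits) - 1
    twos_b = ((~b) + 1) & mask   # invert the bits, add 1
    return (a + twos_b) & mask   # the & mask drops the carry-out

print(format(subtract_twos(0b0110, 0b0010), "04b"))  # 6 - 2
print(format(subtract_twos(0b0101, 0b0011), "04b"))  # 5 - 3
```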

Representation of Sign and Unsigned numbers

In decimal numbers we use + and − signs to indicate the sign of a quantity. In binary, a 0 placed at the head represents a positive number, and a 1 at the head indicates a negative number.

Example: +9 = (0 1001) and −9 = (1 1001) in sign-magnitude form

Add (+5) and (+3)

0 0101 + 0 0011 = 0 1000 (the leading 0 indicates a positive number, i.e. +8)

Add 9 and (-4)

−4: +4 = 0 0100; 1's complement = 1 1011; 2's complement = 1 1100

0 1001 + 1 1100 = 1 0 0101; dropping the carry leaves 0 0101, i.e. +5

Add (-9) and 3

−9: +9 = 0 1001; 1's complement = 1 0110; 2's complement = 1 0111

1 0111 + 0 0011 = 1 1010, which is the 2's complement representation of −6 (1010 → 0101, and 0101 + 1 = 0110)

Add(-12) and (-2)

+12 = 0 1100 and +2 = 0 0010 in binary

1's complements: 1 0011 and 1 1101

2's complements: 1 0100 and 1 1110

1 0100 + 1 1110 = 1 1 0010; dropping the extra carry leaves 1 0010, the 2's complement representation of −14 (0010 → 1101, and 1101 + 1 = 1110 = 14)

Calculation of Binary Number

· Subtract (10001)2 from (101100)2

101100   (minuend)
−10001   (subtrahend)
(Ans) 011011   (difference)

· Subtract (100011)2 from (11001)2

The subtrahend is larger, so subtract the smaller number from the larger and attach a minus sign:

100011 − 011001 = 001010, so the answer is −1010

· Subtract (11001)2 from (100011)2

100011
−11001
(Ans) 001010

· Subtract (11101)2 from (10001)2 by using 2's complement

2's complement of 11101 = 00010 + 1 = 00011

10001
+00011
10100

There is no carry, so the answer is negative:

−(01011 + 1)

−01100

· To subtract a larger number from a smaller one:

· Make the 2's complement of the larger number, then add the two numbers.

· If there is no carry, the answer is negative, obtained as the 2's complement of the sum.

· Add (101.011)2 and (11.110)2

101.011
+11.110
1001.001

Adding starts from the right side; a carry overflowing from the fractional part is taken into the integer part.

· Add (11001.1011)2 and (10011.0110)2

11001.1011

+10011.0110

(Ans) 101101.0001


· Add (1011.1010)2 and (1000.011)2

1011.1010

1000.0110

(Ans) 10100.0000

· Subtract (0010)2 from (0110)2 by using 2's complement

2's complement of 0010 = 1101 + 1 = 1110

0110
+1110
1 0100

Ignoring the carry 1: (Ans) 0100

· Subtract the following:

101.101
−11.011
(Ans) 010.010

1100.01
−1001.11
(Ans) 0010.10

1011.10
−100.11
(Ans) 110.11

Subtract 101 − 0.11

101.00
−000.11

Ans: 100.01

· Subtract 0.11 − 0.101

Write both fractions with the same number of places: 0.110 − 0.101.

Direct subtraction: 0.110 − 0.101 = 0.001

The same question can be done using 2's complement: the 2's complement of 0.101 is 1.010 + 1 (at the least significant place) = 1.011.

0.110
+1.011
1 0.001

A carry is produced, so the result is positive; ignoring the carry 1, the answer is 0.001.

Solved questions from B. Ram's book

· Q.3 Convert the following binary number to its equivalent decimal: 11010

1×2^4 + 1×2^3 + 0×2^2 + 1×2^1 + 0×2^0 = 16 + 8 + 0 + 2 + 0 = (26)10

· Q.4 Convert the following decimal number to its equivalent binary: 19

19 ÷ 2 = 9 remainder 1
9 ÷ 2 = 4 remainder 1
4 ÷ 2 = 2 remainder 0
2 ÷ 2 = 1 remainder 0
final quotient 1

Reading upward: (10011)2

· Q.5 Convert the following binary fraction to a decimal fraction: 0.1011

1×2^-1 + 0×2^-2 + 1×2^-3 + 1×2^-4 = 0.5 + 0 + 0.125 + 0.0625 = (0.6875)10

· Q.6 Convert the following real binary number to its equivalent decimal: 1001.101

Integer part: 1×2^3 + 0×2^2 + 0×2^1 + 1×2^0 = 8 + 1 = 9
Fractional part: 1×2^-1 + 0×2^-2 + 1×2^-3 = 0.5 + 0.125 = 0.625

Result: (9.625)10

· Q.7 Convert the following real decimal number to its equivalent binary: 17.71875

Integer part 17: 17 ÷ 2 = 8 r 1; 8 ÷ 2 = 4 r 0; 4 ÷ 2 = 2 r 0; 2 ÷ 2 = 1 r 0; final quotient 1 → (10001)2

Fractional part 0.71875:
0.71875 × 2 = 1.4375 → 1
0.4375 × 2 = 0.875 → 0
0.875 × 2 = 1.75 → 1
0.75 × 2 = 1.5 → 1
0.5 × 2 = 1.0 → 1
→ (0.10111)2

Result: (10001.10111)2

· Q.8 Perform the following addition: 1100 + 1001

 1100
+1001
10101

· Q.9 Perform the following addition: 101.011 + 11.110

 101.011
+ 11.110
1001.001

· Q.10 Perform the following subtraction: 1101 − 1001

 1101
−1001
 0100

· Q.11 Perform the following subtraction: 101.101 − 11.011

 101.101
− 11.011
 010.010

· Q.16 Perform the following subtractions using 2's complement.

1101 − 1001: first check which number is greater. Here the minuend is greater than the subtrahend, so the addition of the 2's complement produces a carry, which is ignored.

2's complement of 1001 = 0110 + 1 = 0111

1101 + 0111 = 1 0100 → (Ans) 0100

101 − 111: the subtrahend is greater, so take its 2's complement, add, and then negate the result.

2's complement of 111 = 000 + 1 = 001

101 + 001 = 110 (no carry), so the answer is −(2's complement of 110) = −(001 + 1) = −010

101 − 0.11: in this case the minuend is greater, so subtract directly:

101.00
−000.11
100.01

Boolean algebra and logic gate

Boolean algebra is the algebra of logic; it is one of the most basic tools for analyzing and designing electronic circuits. The original purpose of this algebra was to simplify logical statements and solve logical problems. Boolean algebra was invented by George Boole, an English mathematician, in 1854. At first his ideas were applied to algebraic calculation, but later Claude Shannon used them to solve telephone switching circuit problems, and from there they became heavily used in the design of electronic circuits in computer science.

Boolean logic provides the fundamental background for computation in modern binary computer systems. You can represent any algorithm, or any electronic computer circuit, using a system of Boolean equations.

Now consider the statement: x = "Ram is the tallest boy in the class."

This statement may have two possible values, either true or false. It is true only when nobody exceeds Ram's height; if Hari is taller than Ram, the statement becomes false (zero). Therefore each such statement has two possible values.

Similarly, some grammar teachers argue that there is no present tense: whatever task has been finished is a past action, and the remaining tasks still to be done belong to the future. By this argument there are only two states of tense (past and future).

Boolean algebra is a logical calculus of truth values. It resembles the algebra of real numbers, but with the numeric operations of multiplication xy, addition x + y, and negation −x replaced by the respective logical operations of conjunction x∧y, disjunction x∨y, and complement ¬x. The Boolean operations are these and all other operations that can be built from them, such as x∧(y∨z). These turn out to coincide with the set of all operations on the set {0,1} that take only finitely many arguments; there are 2^(2^n) such operations when there are n arguments.
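The count 2^(2^n) can be checked directly: each n-argument Boolean operation is a truth table with 2^n rows, and every row is independently 0 or 1. A short Python sketch:

```python
from itertools import product

# Number of distinct Boolean operations on n arguments: one truth table
# per operation, 2**n rows per table, each row 0 or 1.
for n in (1, 2, 3):
    print(n, 2 ** (2 ** n))

# For n = 2, enumerate all 16 truth tables explicitly.
tables = list(product((0, 1), repeat=2 ** 2))
assert len(tables) == 16
```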

Basic operations

The binary computing system is based on these algebraic operations. Whereas elementary algebra is based on the numeric operations multiplication xy, addition x + y, and negation −x, Boolean algebra is customarily based on their logical counterparts: conjunction x∧y (AND, written A.B), disjunction x∨y (OR, written A+B), and complement or negation ¬x (NOT, written Ā). In electronics, AND is represented as multiplication, OR as addition, and NOT with an overbar; x∧y, x∨y and ¬x therefore become xy, x + y and x̄ respectively.

Conjunction is the closest of these three to its numerical counterpart, in fact on 0 and 1 it is multiplication. As a logical operation the conjunction of two propositions is true when both propositions are true, and otherwise is false. The first column of Figure below tabulates the values of x∧y for the four possible valuations for x and y; such a tabulation is traditionally called a truth table.

AND Operator

X | Y | X.Y
1 | 1 | 1
1 | 0 | 0
0 | 1 | 0
0 | 0 | 0

Logical multiplication is defined as 1.1 = 1, 1.0 = 0, 0.1 = 0, 0.0 = 0.

Similarly, take two statements: "The man is tall" = X and "The man is wise" = Y. Then X AND Y has four possible outcomes, tabulated above. A dot ".", "∧" or "∩" is used to represent the AND operation, so X AND Y is written X.Y. The rules for the AND operation are exactly the same as those of simple arithmetic multiplication; this is just a coincidence, but it lets us remember the rules without any effort.

OR Operation

For example, here are two statements: "He will give me a pencil" and "He will give me a pen."

These two statements can be written as the compound statement X OR Y. The OR here is understood to be inclusive: X OR Y means X, or Y, or both. X may be true or false, and likewise Y; the compound statement X OR Y is true when either one or both of the statements are true. The "+" is used to represent the OR operation, so X OR Y is written X + Y. Representing true by 1 and false by 0, the truth table shows the possibilities of the OR operator:

X | Y | X+Y
1 | 1 | 1
1 | 0 | 1
0 | 1 | 1
0 | 0 | 0

NOT Operation

"Ram does not have any apples": this sentence in English is the negation of "Ram has some apples."

"The man is wise" (assume this = X) may be true or false. If the statement is true, the statement obtained after negation, "The man is not wise" (NOT X = X̄ = X′), is false; if "The man is wise" is false, then after inversion "The man is not wise" is true. The truth table for the NOT operation:

X | X̄
1 | 0
0 | 1

Logical negation however does not work like numerical negation at all. Instead it corresponds to: ¬x = x+1 mod 2. Yet it shares in common with numerical negation the property that applying it twice returns the original value: ¬¬x = x, just as −(−x) = x.
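The three basic operations can be tabulated in a few lines of Python; note that on 0/1 values conjunction behaves like multiplication and negation is 1 − x, exactly as described above.

```python
# Truth tables for AND, OR and NOT on the truth values {0, 1}.
print("X Y  X.Y  X+Y")
for x in (0, 1):
    for y in (0, 1):
        print(x, y, " ", x and y, "  ", x or y)

print("X  NOT X")
for x in (0, 1):
    print(x, " ", 1 - x)   # negation flips the bit
```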

Examples of switches to illustrate logical operations in Boolean algebra

[Figure: a battery connected through switches A and B to a bulb]

Electrical switches are very good examples t