
Data processing in Industrial Systems course notes after week 5




Introduction

Perhaps the most important challenge facing information systems is to provide users with timely and versatile access to data stored in computer files. In a dynamic business environment there are many unanticipated needs for information. Often the basic underlying data that satisfy these information needs are contained in computer files but cannot be accessed and output in a suitable format on a timely basis. Database management systems have the potential to meet this challenge.

Traditional Approach to Information Processing

Data Redundancy
Data redundancy increases data editing, maintenance, and storage costs. In addition, data stored on two master files (which should in theory be identical) are often different for good reason, but such differences inevitably create confusion.

Lack of Data Integration
To change a file structure (e.g., to add a new field) in a traditional programming language such as COBOL:

1. Create a new file with the new field added to the old file structure.
2. Copy the data from the old file into the new file.
3. Delete the old file.
4. Rename the new file with the old file's name.

[Diagram: the old file A is copied into a new file B with extended fields; A is then deleted and B is renamed A.]
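In a modern language the same four steps collapse into a few lines. A minimal Python sketch (the file name customers.csv, the CSV layout, and the BirthDay field are hypothetical, chosen to match the old-file/new-file example below):

import csv
import os

# Steps 1-2: create a new file whose structure is the old one plus a new field,
# and copy every record across, giving the new field a default value.
with open("customers.csv", newline="") as old, \
     open("customers_new.csv", "w", newline="") as new:
    reader = csv.DictReader(old)                # old structure: Name, Surname, TelNo
    fields = reader.fieldnames + ["BirthDay"]   # extended structure
    writer = csv.DictWriter(new, fieldnames=fields)
    writer.writeheader()
    for record in reader:
        record["BirthDay"] = ""                 # the new field starts out empty
        writer.writerow(record)

# Steps 3-4: delete the old file and give the new file the old file's name.
os.remove("customers.csv")
os.rename("customers_new.csv", "customers.csv")

The point of the example is how much ceremony even the modern version needs: the program, not the database, carries the knowledge of the file structure.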


Program/Data Dependence

Lack of Flexibility
The information-retrieval capabilities of most traditional systems are limited to

predetermined requests for data. Therefore, the system produces information in the

form of scheduled reports and queries which it has been previously programmed to

handle. If management needs unanticipated information, it can perhaps be provided if the underlying data are in the system's files, but extensive programming is often involved.

Thus, by the time the programming is completed, the information may no longer be

required or useful. This problem has long plagued information systems. Management

knows that a particular piece of information can be produced on a one-time basis, but

the expense and time involved are generally prohibitive. Ideally, information

processing should be able to mix related data elements from several different files

and produce information with a fast turnaround to service unanticipated requests for

information.

Old file: Name, Surname, Tel. No
New file: Name, Surname, Tel. No, Birth Day


Database Management Systems (ORACLE, SYBASE, MS-SQL, MySQL, NOSQL,

PROGRESS)

It is very important to select a proper programming tool when building an MIS.

A DBMS should be:

- Effective
- Fast
- Able to communicate easily with other DBMSs
- Cheap
- Safe (reliable)
- Equipped with good security functions
- Equipped with upgrade utilities, so that programs written for an old version keep working with new ones (migration)
- Hardware-independent (a program prepared on an IBM-compatible PC should also work on a Unix operating system platform)

Case study: In a large Turkish company,

- The marketing department enters customer orders into the computer.
- They print out the customer orders and give this report to the production planning department.
- The production planning department re-enters these data into its own database for planning.

Logical Data-Base Structures

Two key features of a DBMS are the ability to reduce data redundancy and the ability

to associate related data elements such as related fields and records. These

functions are accomplished through the use of keys, embedded pointers, and linked

lists.

Tree Structures

Figure illustrates student data in a tree (hierarchical) structure. The lower part of the

figure shows the data fields in each record. A tree structure consists of records

(often called segments) that are linked to related records in a one-to-many

relationship. Each record may have only one parent but an unlimited number of


children. The top record is called the root. As shown in figure, each student can

attend many semesters and take many courses in each semester. However, each

course is tied to a single semester, and the data in each semester record are in turn

tied to a single student.

Figure: A tree structure (example: a product tree or bill of materials), showing the fields in the student record, the semester record, and the course record.
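A minimal Python sketch of such a hierarchy (all names and values are invented): each record is reachable only through its single parent, from the root student record downward.

# Root record: the student. Each level owns its children (one-to-many links).
student = {
    "student_number": 1001,
    "student_name": "A. Student",
    "semesters": [
        {
            "semester": "FALL",
            "semester_fees": 1000,
            "courses": [
                {"course_number": "ENG1020", "course_grade": "AA"},
                {"course_number": "ACCT2111", "course_grade": "BB"},
            ],
        },
        {
            "semester": "SPRING",
            "semester_fees": 1000,
            "courses": [{"course_number": "MSCI2011", "course_grade": "BA"}],
        },
    ],
}

# A course can only be reached through its single parent semester,
# and a semester only through its single parent student (the root).
for semester in student["semesters"]:
    for course in semester["courses"]:
        print(student["student_number"], semester["semester"], course["course_number"])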

Network Structures

A network structure allows many-to-many relationships among the nodes in the structure. The figure illustrates a network structure between courses and students. Each student can enroll in several classes, and each class has many students.

The physical storage as well as data linkage in a network structure involve embedded pointers in each record, as in a tree structure. There are several schemes for using pointers with network structures. One is similar to the scheme discussed under tree structures, where each course record (for example, course 1) contains the address of the first student in the course, and then the first student record, in turn, contains the address of the second student in the course, and so on, thereby forming a linked list.

[Figure: a network structure linking student records to course records ENG1020, ACCT2111, and MSCI2011; a product-tree example with root CAR 307 and components such as ENGINE, ELECTRICAL SYSTEM, and VALVES. Fields in the student record: Student Number, Student Name, Address, To-Date Grade-Point Average, Date Admitted, Major. Fields in the semester record: Semester, Semester Fees, Fees Paid, Semester Grade-Point Average. Fields in the course record: Course Number, Course Grade.]
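A minimal Python sketch of that pointer scheme (record addresses are simplified to dictionary keys, and all names are invented):

# A simplified record store: dictionary keys stand in for record addresses.
records = {
    "course1": {"course_number": "ENG1020", "first_student": "s1"},
    "s1": {"name": "Student 1", "next": "s2"},   # embedded pointer to next student
    "s2": {"name": "Student 2", "next": "s3"},
    "s3": {"name": "Student 3", "next": None},   # end of the linked list
}

# Follow the embedded pointers: course record -> first student -> next -> ...
address = records["course1"]["first_student"]
while address is not None:
    student = records[address]
    print(student["name"])
    address = student["next"]

In a full many-to-many network, each student record would carry one such pointer per course it is enrolled in, so every course's student list can be chained independently.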

Relational Structures

Most business data have traditionally been organized in the form of simple tables with only columns and rows. In a relational DBMS, these tables are called relations. This data structure is known as the relational model, since it is based on the mathematical theory of relations. One of the greatest advantages of the relational model is its conceptual simplicity. The relational or tabular model of data is used in a large variety of applications, ranging from your weekly shopping list to the annual report of the world's largest corporation. Most people are familiar with the relational model as a table, but the relational model does use some unfamiliar terminology. What we have come to know as a file is called either a table or a relation. Each row in the table is called a tuple (rhymes with couple); a tuple is the same as a record in regular file terminology. The columns of the table are known as attributes, and they are equivalent to fields within records. Instead of using the formal relational terminology of relations, tuples, and attributes, we will use the more familiar files, records, and fields, as do most real-world relational database systems.


Stock-master

Stock-code   Stock-name   Unit price
3000         ABC          AD 20
3001         XYZ          AD 100

Depot-master

Depot no   Depot name   Depot manager
1          İSTANBUL     AD
2          ANKARA       UC

Stock-status

Stock-code   Depot no   Quantity
3000         1          300
3001         2          200
3001         3          50
3001         4          10

Materials Inventory Weekly Report

Stock Code   Stock Name   Depot No   Depot Name   Quantity   Unit Price   Amount
3000         ABC          1          İstanbul     300        AD 20        6,000
3001         XYZ          2          Ankara       200        AD 100       20,000
3001         XYZ          3          Yalova       50         AD 100       5,000
3001         XYZ          4          İzmit        10         AD 100       1,000
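The weekly report is exactly what a relational join produces. A minimal sketch using Python's built-in sqlite3 module (table and column names follow the figure; depot rows for Yalova and İzmit are added so the join reproduces all four report lines):

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE stock_master(stock_code INTEGER, stock_name TEXT, unit_price REAL);
CREATE TABLE depot_master(depot_no INTEGER, depot_name TEXT, depot_manager TEXT);
CREATE TABLE stock_status(stock_code INTEGER, depot_no INTEGER, quantity INTEGER);

INSERT INTO stock_master VALUES (3000, 'ABC', 20), (3001, 'XYZ', 100);
INSERT INTO depot_master VALUES (1, 'İstanbul', 'AD'), (2, 'Ankara', 'UC'),
                                (3, 'Yalova', ''), (4, 'İzmit', '');
INSERT INTO stock_status VALUES (3000, 1, 300), (3001, 2, 200),
                                (3001, 3, 50), (3001, 4, 10);
""")

# Join the three tables and compute the Amount column of the weekly report.
for row in con.execute("""
    SELECT s.stock_code, m.stock_name, s.depot_no, d.depot_name,
           s.quantity, m.unit_price, s.quantity * m.unit_price AS amount
    FROM stock_status s
    JOIN stock_master m ON m.stock_code = s.stock_code
    JOIN depot_master d ON d.depot_no = s.depot_no
    ORDER BY s.stock_code, s.depot_no
"""):
    print(row)

No data are duplicated: stock names and prices live only in stock_master, depot names only in depot_master, and the report is assembled on demand.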


Enterprise Resource Planning: The Business Backbone

What is ERP?
ERP is the technological backbone of e-business, an enterprise-wide transaction framework with links into sales order processing, inventory management and control, production and distribution planning, and finance.

Benefits and challenges of ERP
ERP systems can generate significant business benefits for a company. Many companies have found major business value in their use of ERP in several basic ways:

Quality and efficiency: ERP creates a framework for integrating and improving a company's internal business processes, resulting in significant improvements in the quality and efficiency of customer service, production, and distribution.

Decreased costs: Many companies report significant reductions in

transaction processing costs and hardware, software, and IT support staff

compared to the nonintegrated legacy systems that were replaced by their

new ERP systems.

Decision support: ERP provides vital cross-functional information on

business performance quickly to managers to significantly improve their ability

to make better decisions in a timely manner across the entire business

enterprise.

Enterprise agility: Implementing ERP systems breaks down many former

departmental and functional walls or “silos” of business processes, information

systems, and information resources. This results in more flexible

organizational structures, managerial responsibilities, and work roles, and

therefore a more agile and adaptive organization and workforce that can more

easily capitalize on new business opportunities.

Causes of ERP failures

What have been the major causes of failure in ERP projects? In almost every case, the business managers and IT professionals of these companies underestimated the


complexity of the planning, development, and training that were needed to prepare

for a new ERP system that would radically change their business processes and

information systems. Failure to involve affected employees in the planning and

development phases and change management programs, or trying to do too much

too fast in the conversion process, were typical causes of failed ERP projects.

Insufficient training in the new work tasks required by the ERP system, and failure to

do enough data conversion and testing, were other causes of failure. In many cases,

ERP failures were also due to over-reliance by company or IT management on the

claims of ERP software vendors or the assistance of prestigious consulting firms

hired to lead the implementation. The following experiences of companies that did it

right give us a helpful look at what is needed for a successful ERP implementation.

ERP - 2.ppt, sap 1.ppt

Trends in ERP
Today, ERP is still evolving, adapting to developments in technology and the demands of the market. Four important trends are shaping ERP's continuing evolution: improvements in integration and flexibility, extensions to e-business applications, a broader reach to new users, and the adoption of Internet technologies.

(B2B: BUSINESS TO BUSINESS, B2C: BUSINESS TO

CUSTOMERS/CONSUMERS)

Customer Relationship Management: The Business Focus

What is CRM?
Managing the full range of the customer relationship involves two related objectives: one, to provide the organization and all of its customer-facing employees with a single, complete view of every customer at every touch point and across all channels; and two, to provide the customer with a single, complete view of the company and its extended channels.

Major application components of a CRM system:

Contact and account management

Sales


Marketing and fulfillment

Customer service and support

The performance of marketing & sales personnel

Retention and loyalty programs

(DATA WAREHOUSE, DATA MINING)

Salesforce.com, SugarCRM

SUPPLY CHAIN MANAGEMENT: THE BUSINESS NETWORK

What is SCM?
Legacy supply chains are clogged with unnecessary steps and redundant stockpiles. For instance, a typical box of breakfast cereal spends an incredible 104 days getting from factory to supermarket, struggling its way through an unbelievable maze of wholesalers, distributors, brokers, and consolidators, each of which has a warehouse. The e-commerce opportunity lies in the fusing of each company's internal systems to those of its suppliers, partners, and customers. This fusion forces companies to better integrate inter-enterprise supply chain processes to improve manufacturing efficiency and distribution effectiveness.

Unilever Beer Game (trick): www.masystem/beergame
Book order by using amazon.com
GM & TOYOTA: # OF PURCHASING DEPT. PERSONNEL
ERP: DSS-8-SC-ERP.ppt

MRP: MATERIALS REQUIREMENT PLANNING
MRP II: MANUFACTURING RESOURCES PLANNING

Electronic Data Interchange

Electronic data interchange (EDI) was one of the earliest uses of information

technology for supply chain management. EDI involves the electronic exchange of

business transaction documents over the Internet and other networks between

supply chain trading partners (organizations and their customers and suppliers). Data

representing a variety of business transaction documents (such as purchase orders,

invoices, requests for quotations, and shipping notices) are automatically exchanged


between computers using standard document message formats. Typically, EDI

software is used to convert a company’s own document formats into standardized

EDI formats as specified by various industry and international protocols. Thus, EDI is

an example of the almost complete automation of an e-commerce supply chain

process. And EDI over the Internet, using secure virtual private networks, is a

growing B2B e-commerce application.
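As a rough illustration of that conversion step, an internal purchase-order record can be flattened into an X12-flavoured segment string. The sketch below is illustrative only, not a standards-compliant X12 850 implementation, and every field name in it is invented:

def to_edi_segments(po):
    # Build X12-style segments for a purchase order; '~' terminates each segment.
    segments = [f"BEG*00*NE*{po['po_number']}**{po['date']}"]
    for i, line in enumerate(po["lines"], start=1):
        segments.append(
            f"PO1*{i}*{line['qty']}*EA*{line['unit_price']}**VP*{line['part_no']}"
        )
    return "~".join(segments) + "~"

order = {
    "po_number": "4501",
    "date": "20240101",
    "lines": [{"qty": 10, "unit_price": 2.5, "part_no": "DE31030"}],
}
print(to_edi_segments(order))
# BEG*00*NE*4501**20240101~PO1*1*10*EA*2.5**VP*DE31030~

Real EDI software performs this mapping in both directions, against the full industry-standard segment dictionaries.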

Figure: The supply chain management functions and potential benefits offered by the SCM module in the mySAP e-business software suite.

SCM Functions

Planning
- Supply chain design: optimize the network of suppliers, plants, and distribution centers.
- Collaborative demand and supply planning: develop an accurate forecast of customer demand by sharing demand and supply forecasts instantaneously across multiple tiers; Internet-enable collaborative scenarios, such as collaborative planning, forecasting, and replenishment (CPFR), and vendor-managed inventory.

Execution
- Materials management: share accurate inventory and procurement order information; ensure that materials required for production are available in the right place at the right time; reduce raw material spending, procurement costs, safety stocks, and raw material and finished goods inventory.
- Collaborative manufacturing: optimize plans and schedules while considering resource, material, and dependency constraints.
- Collaborative fulfillment: commit to delivery dates in real time; fulfill orders from all channels on time with order management, transportation planning, and vehicle scheduling; support the entire logistics process, including picking, packing, shipping, and delivery in foreign countries.
- Supply chain event management: monitor every stage of the supply chain process, from price quotation to the moment the customer receives the product, and receive alerts when problems arise.
- Supply chain performance management: report key measurements in the supply chain, such as fill rates, order cycle times, and capacity utilization.

Trends in SCM
The supplier-facing applications arena will see the continued growth of public as well as private networks that transform linear and inflexible supply chains into nonlinear and dynamic fulfillment networks. Supplier-facing applications will also evolve along another dimension: from automation and integration of supply chains to collaborative sourcing, planning, and design across their supplier networks.


Figure: Stages in the use of supply chain management.

SCM Stage 1: current supply chain improvement; supply chain and e-commerce loosely coupled (information sharing, product/sales data, sourcing help, logistics, order fulfillment).

SCM Stage 2: intranet/extranet links to trading partners; supplier network expansion (order management, inventory management, resource allocation, systems use and integration).

SCM Stage 3: collaborative planning and fulfillment; extranet- and exchange-based collaboration (collaborative marketing, sales and service; SCM optimization; collaborative design and delivery).

Decision Support Systems

Decision support systems are computer-based information systems that provide interactive information support to managers and business professionals during the decision-making process.


Decision support systems use (1) analytical models, (2) specialized databases, (3) a

decision maker’s own insights and judgments, and (4) an interactive, computer-

based modeling process to support the making of semi-structured and unstructured

business decisions. See Figure: Comparing decision support systems and management information systems.

Example: Sales managers typically rely on management information systems to produce sales analysis reports. These reports contain sales performance figures by product line, salesperson, sales region, and so on. A decision support system, on the other hand, would also interactively show a sales manager the effects on performance of changes in a variety of factors (such as promotion expense and salesperson compensation). The DSS could use several criteria (such as expected gross margin and market share) to evaluate and rank several alternative combinations of sales performance factors.

Figure: Comparing decision support systems and management information systems.

Decision support provided
- Management information systems: provide information about the performance of the organization.
- Decision support systems: provide information and decision support techniques to analyze specific problems or opportunities.

Information form and frequency
- Management information systems: periodic, exception, demand, and push reports and responses.
- Decision support systems: interactive inquiries and responses.

Information format
- Management information systems: pre-specified, fixed format.
- Decision support systems: ad hoc, flexible, and adaptable format.

Information processing methodology
- Management information systems: information produced by extraction and manipulation of business data.
- Decision support systems: information produced by analytical modeling of business data.


Therefore, DSS are designed to be ad hoc, quick-response systems that are initiated and controlled by business decision makers. Decision support systems are thus able to directly support the specific types of decisions and the personal decision-making styles and needs of individual executives, managers, and business professionals.

DSS Components

Unlike management information systems, decision support systems rely on model bases as well as databases as vital system resources. A DSS model base is a

software component that consists of models used in computational and analytical

routines that mathematically express relationships among variables. For example, a

spreadsheet program might contain models that express simple accounting

relationships among variables such as Revenue – Expenses = Profit. Or a DSS

model base could include models and analytical techniques used to express much

more complex relationships. For example, it might contain linear programming

models, multiple regression forecasting models, and capital budgeting present value

models. Such models may be stored in the form of spreadsheet models or templates,

or statistical and mathematical programs and program modules.

DSS software packages can combine model components to create integrated models that support specific types of decisions. DSS software typically contains built-in analytical modeling routines and also enables you to build your own models. Many DSS packages are now available in microcomputer and Web-enabled versions. Of course, electronic spreadsheet packages also provide some of the model building (spreadsheet models) and analytical modeling (what-if and goal-seeking analysis) offered by more powerful DSS software.


DSS Packages

Retail: Information Advantage and Unisys offer the Category Management Solution Suite, an OLAP (OnLine Analytical Processing) decision support system and industry-specific data model.

Insurance: Computer Associates offers RiskAdvisor, an insurance risk decision

support system whose data model stores information in insurance industry specific

tables designed for optimal query performance.

Telecom: NCR and SABRE Decision Technologies have joined forces to create the NCR Customer Retention program for the communications industry, including data marts that telephone companies can use for decision support in managing customer loyalty, quality of service, network management, fraud, and marketing.

Using Decision Support Systems

Using a decision support system involves an interactive analytical modeling process. For example, using a DSS software package for decision support may result

in a series of displays in response to alternative what-if changes entered by the

manager. This differs from the demand responses of management information

systems, since decision makers are not demanding prespecified information. Rather,

they are exploring possible alternatives. Thus, they do not have to specify their

information needs in advance. Instead, they use the DSS to find the information they

need to help them make a decision. That is the essence of the decision support

system concept.

Figure: Activities and examples of the major types of analytical modeling

What-if analysis: observing how changes to selected variables affect other variables. Example: What if we cut advertising by 10%? What would happen to sales?

Sensitivity analysis: observing how repeated changes to a single variable affect other variables. Example: Let's cut advertising by $1,000 repeatedly so we can see its relationship to sales.

Goal-seeking analysis: making repeated changes to selected variables until a target value is reached. Example: Let's try increases in advertising until sales reach $1 million.

Optimization analysis: finding an optimum value for selected variables, given certain constraints. Example: What's the best amount of advertising to have, given our budget and choice of media?

Using a decision support system involves four basic types of analytical modeling

activities: (1) what-if analysis, (2) sensitivity analysis, (3) goal-seeking analysis, and

(4) optimization analysis. Let’s briefly look at each type of analytical modeling that

can be used for decision support. See Figure: Activities and examples of the major

types of analytical modeling.
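All four activities can be run against one small model. A minimal Python sketch follows; the sales-response model and every number in it are invented for illustration:

# A toy model base: sales as a simple function of advertising spend.
def sales(advertising):
    return 50_000 + 8.0 * advertising

# What-if analysis: change a variable and observe the effect on another.
print("Sales if advertising is cut by 10%:", sales(0.9 * 20_000))

# Sensitivity analysis: repeated changes to a single variable.
for adv in range(10_000, 30_001, 5_000):
    print(adv, "->", sales(adv))

# Goal-seeking analysis: search for the input that reaches a target output.
advertising, target = 0, 1_000_000
while sales(advertising) < target:
    advertising += 1_000
print("Advertising needed for $1 million in sales:", advertising)

# Optimization analysis: best value subject to a constraint (a $25,000 budget),
# here maximizing a profit-like objective of sales minus advertising spend.
best = max(range(0, 25_001, 500), key=lambda a: sales(a) - a)
print("Best advertising level within budget:", best)

A real DSS would hold such models in its model base and drive them from an interactive front end rather than a script.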

Executive Information Systems

Executive information systems (EIS) are information systems that combine many of

the features of management information systems and decision support systems.

When they were first developed, their focus was on meeting the strategic information

needs of top management. Thus, the first goal of executive information systems was

to provide top executives with immediate easy access to information about a firm’s

critical success factors (CSFs), that is, key factors that are critical to accomplishing

an organization’s strategic objectives. For example, the executives of a retail store

chain would probably consider factors such as its e-commerce versus traditional

sales results, or its product line mix to be critical to its survival and success.

Implementing Business Systems

Evaluating hardware, software, and services
How do companies evaluate and select hardware, software, and IT services, such as those shown in the figure? Large companies may require suppliers to present bids and proposals based on system specifications developed during the design stage of systems development. These specifications set out the minimum acceptable physical and performance characteristics of the required hardware and software. Most large companies and all government agencies formalize these requirements by listing them in a document called an RFP (request for proposal) or RFQ (request for quotation). Then they send the RFP or RFQ to appropriate vendors, who use it as the basis for preparing a proposed purchase agreement.

KEY PERFORMANCE INDICATORS APPLICATION FOR GROUPS OF STUDENTS: HR, R&D, SALES, PURCHASING

Companies may use a scoring system of evaluation when there are several competing proposals for a hardware or software acquisition. They give each evaluation factor a certain number of maximum possible points. Then they assign each competing proposal points for each factor, depending on how well it meets the user's specifications. Scoring evaluation factors for several proposals helps organize and document the evaluation process. It also spotlights the strengths and weaknesses of each proposal. Another selection method is AHP (the analytic hierarchy process):
http://www.ufukcebeci.com/EditModule.aspx?tabid=42&mid=44&def=News%20Article%20View&ItemId=7
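A minimal Python sketch of such a scoring evaluation (the factors, maximum points, and proposal scores are all invented):

# Maximum possible points per evaluation factor.
max_points = {"performance": 30, "cost": 25, "reliability": 25, "support": 20}

# Points assigned to each competing proposal for each factor.
proposals = {
    "Vendor A": {"performance": 24, "cost": 20, "reliability": 22, "support": 15},
    "Vendor B": {"performance": 28, "cost": 15, "reliability": 20, "support": 18},
}

for name, scores in proposals.items():
    total = sum(scores[factor] for factor in max_points)
    print(f"{name}: {total} of {sum(max_points.values())} possible points")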

YOU PLAN TO OPEN A PIZZA RESTAURANT. DEFINE THE HARDWARE, SOFTWARE, AND MIS & SUBMODULES NEEDED.

Figure: An example of the implementation process activities and time lines for a company installing an intranet-based employee benefits system in its human resource management department.

Intranet implementation activities (spanning Months 1-4):
- Acquire and install server hardware and software
- Train administrators
- Acquire and install browser software
- Acquire and install publishing software
- Train benefits employees on publishing software
- Convert benefits manuals and add revisions
- Create Web-based tutorials for the intranet
- Hold rollout meetings

Whatever the claims of hardware manufacturers and software suppliers, the performance of hardware and software must be demonstrated and evaluated. Independent hardware and software information services (such as Datapro and Auerbach) may be used to gain detailed specification information and evaluations.


Other users are frequently the best source of information needed to evaluate the claims of manufacturers and suppliers. That's why Internet newsgroups established to exchange information about specific software or hardware vendors and their products have become one of the best sources for obtaining up-to-date information about the experiences of users of the products.

Large companies frequently evaluate proposed hardware and software by requiring

the processing of special benchmark test programs and test data. Benchmarking

simulates the processing of typical jobs on several computers and evaluates their

performances. Users can then evaluate test results to determine which hardware

device or software package displayed the best performance characteristics.

Hardware evaluation factors

When you evaluate the hardware needed by a new business application, you should investigate specific physical and performance characteristics for each computer system or peripheral component to be acquired. Specific questions must be answered concerning many important factors. Ten of these hardware evaluation factors and questions are summarized in the figure.

Figure: A summary of ten major hardware evaluation factors.

- Performance: What are its speed, capacity, and throughput?
- Cost: What is its lease or purchase price? What will be its cost of operation and maintenance?
- Reliability: What are the risk of malfunction and its maintenance requirements? What are its error control and diagnostic features?
- Compatibility: Is it compatible with existing hardware and software? Is it compatible with hardware and software provided by competing suppliers?
- Technology: In what year of its product life cycle is it? Does it use a new untested technology, or does it run the risk of obsolescence?
- Ergonomics: Has it been "human factors engineered" with the user in mind? Is it user-friendly, designed to be safe, comfortable, and easy to use?
- Connectivity: Can it be easily connected to wide area and local area networks that use different types of network technologies and bandwidth alternatives?
- Scalability: Can it handle the processing demands of a wide range of end users, transactions, queries, and other information processing requirements?
- Software: Is system and application software available that can best use this hardware?
- Support: Are the services required to support and maintain it available?
- Overall rating

Notice that there is much more to evaluating hardware than determining the fastest

and cheapest computing device. For example, the question of obsolescence must be

addressed by making a technology evaluation. The factor of ergonomics is also very

important. Ergonomic factors ensure that computer hardware and software are user-

friendly, that is, safe, comfortable, and easy to use. Connectivity is another important

evaluation factor, since so many network technologies and bandwidth alternatives

are available to connect computer systems to the Internet, intranet, and extranet

networks.

Software evaluation factors

You should evaluate software according to many factors that are similar to those used for hardware evaluation. Thus, the factors of performance, cost, reliability, availability, compatibility, modularity, technology, ergonomics, and support should be used to evaluate proposed software acquisitions. In addition, however, the software evaluation factors summarized in the figure must also be considered. You should answer the questions they generate in order to properly evaluate software purchases. For example, some software packages are notoriously slow, hard to use, bug-filled, or poorly documented. They are not good choices, even if offered at attractive prices.

Figure: A summary of ten major software evaluation factors.

- Quality: Is it bug-free, or does it have many errors in its program code?
- Efficiency: Is the software a well-developed system of program code that does not use much CPU time, memory capacity, or disk space?
- Flexibility: Can it handle our e-business processes easily, without major modification?
- Security: Does it provide control procedures for errors, malfunctions, and improper use?
- Connectivity: Is it Web-enabled so it can easily access the Internet, intranets, and extranets on its own, or by working with Web browsers or other network software?
- Language: Is it written in a programming language that is familiar to our own software developers?
- Documentation: Is the software well documented? Does it include help screens and helpful software agents?
- Hardware: Does existing hardware have the features required to best use this software?
- Other factors: What are its performance, cost, reliability, availability, compatibility, modularity, technology, ergonomics, scalability, and support characteristics?
- Overall rating

Evaluating IS services

Most suppliers of hardware and software products and many other firms offer a

variety of IS services to end users and organizations. Examples include assistance

during e-commerce website development, installation or conversion of new hardware

and software, employee training, and hardware maintenance. Some of these

services are provided without cost by hardware manufacturers and software

suppliers.

Other types of IS services needed by a business can be outsourced to an outside

company for a negotiated price. For example, systems integrators take over

complete responsibility for an organization’s computer facilities when an organization

outsources its computer operations. They may also assume responsibility for

developing and implementing large systems development projects that involve many

vendors and subcontractors. Value-added resellers (VARs) specialize in providing

industry-specific hardware, software, and services from selected manufacturers.

Many other services are available to end users, including systems design, contract

programming, and consulting services. Evaluation factors and questions for IS

services are summarized in the figure.

Figure: Evaluation factors for IS services.

- Performance: What has been their past performance in view of their past promises?
- Systems development: Are website and other e-business developers available? What are their quality and cost?
- Maintenance: Is equipment maintenance provided? What are its quality and cost?
- Conversion: What systems development and installation services will they provide during the conversion period?
- Training: Is the necessary training of personnel provided? What are its quality and cost?
- Backup (timed backup): Are similar computer facilities available nearby for emergency backup purposes?
- Accessibility: Does the vendor provide local or regional sites that offer sales, systems development, and hardware maintenance services? Is a customer support center available at the vendor's website? Is a customer hot line provided?
- Business position: Is the vendor financially strong, with good industry market prospects?
- Hardware: Do they provide a wide selection of compatible hardware devices and accessories?
- Software: Do they offer a variety of useful e-business software and application packages?
- Overall rating

ASP: Application Service Provider

Other implementation activities

How to design forms: DESIGN A PURCHASING (CUSTOMER) ORDER FORM

How to code materials:
• The code must be unique.
• First use main groups, then the subgroups.
• Choose the code considering extensions over time for the material/product (the code should be open-ended).
• Don't use /, :, %, ; and other special characters.
• Don't use Roman numerals (XII, V).
• If the code is too long, it is hard to remember (talking codes help).
• If possible, use nationally and internationally compatible standards such as barcodes.

Examples: DE 31030, FE 43002

BAR CODE CONTROL: 100315421004 → 1+0+0+3+1+5+4+2+1+0+0+4 = 21; 2+1 = 3 (check digit)
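The digit-sum check above is easy to automate. A minimal Python sketch of the repeated digit-sum (digital root) rule used in the example; note that this is the example's own rule, not a standard barcode check digit such as EAN-13's weighted scheme:

def check_digit(code: str) -> int:
    # Sum the digits repeatedly until a single digit remains (the digital root).
    total = sum(int(d) for d in code)
    while total >= 10:
        total = sum(int(d) for d in str(total))
    return total

print(check_digit("100315421004"))   # 1+0+0+3+1+5+4+2+1+0+0+4 = 21, then 2+1 = 3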

Testing, documentation, and training are the keys to successful implementation of a

new business system.

Testing


System testing may involve testing website performance, testing and debugging

software, and testing new hardware. An important part of testing is the review of

prototypes of displays, reports, and other output. Prototypes should be reviewed by

end users of the proposed systems for possible errors. Of course, testing should not

occur only during the system’s implementation stage, but throughout the system’s

development process. For example, you might examine and critique prototypes of

input documents, screen displays, and processing procedures during the systems

design stage. Immediate end user testing is one of the benefits of a prototyping

process.

They say 30% of the software at NASA is written in FORTRAN. If it works, don't touch it.

Software testing

Testing methods

Static vs. dynamic testing

There are many approaches available in software testing. Reviews, walkthroughs, or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, as proofreading, plus when programming tools/text editors check source code structure or compilers (pre-compilers) check syntax and data flow as static program analysis. Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code that are applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.

Static testing involves verification, whereas dynamic testing involves validation. Together they help improve software quality. Among the techniques for static analysis, mutation testing can be used to ensure the test cases will detect errors which are introduced by mutating the source code.

The box approach

Software testing methods are traditionally divided into white- and black-box testing. These two approaches are used to describe the point of view that a test engineer takes when designing test cases.


White-box testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing, by seeing the source code) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

Techniques used in white-box testing include:

API testing  – testing of the application using public and private APIs (application programming interfaces)

Code coverage  – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)

Fault injection  methods – intentionally introducing faults to gauge the efficacy of testing strategies

Mutation testing methods
Static testing methods

Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[22] Code coverage as a software metric can be reported as a percentage for:

- Function coverage, which reports on functions executed
- Statement coverage, which reports on the number of lines executed to complete the test
- Decision coverage, which reports on whether both the True and the False branch of a given test have been executed

100% statement coverage ensures that every statement is executed at least once, though not that every branch outcome (in terms of control flow) is taken. This is helpful in ensuring correct functionality, but it is not sufficient, since the same code may process different inputs correctly or incorrectly.
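A tiny invented example of that gap: a single test can execute every statement of the function below (100% statement coverage) while never exercising the False outcome of its one decision.

def classify(x):
    result = "non-negative"
    if x < 0:
        result = "negative"
    return result

# This one test runs every statement, including the branch body:
assert classify(-1) == "negative"

# Decision coverage additionally requires a test where the condition is False:
assert classify(5) == "non-negative"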

Black-box testing

Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation, without seeing the source code. The testers are only aware of what the software is supposed to do, not how it does it.[23] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing, and specification-based testing.

Specification-based testing aims to test the functionality of software according to the applicable requirements.[24] This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs to derive test cases. These tests can be functional or non-functional, though usually functional.

Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[25]

One advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[26] Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case, or leaves some parts of the program untested.

This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing as well.

Visual testing

The aim of visual testing is to provide developers with the ability to examine what was happening at the point of software failure by presenting the data in such a way that the developer can easily find the information she or he requires, and the information is expressed clearly.[27][28]

At the core of visual testing is the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. Visual testing therefore requires the recording of the entire test process – capturing everything that occurs on the test system in video format. Output videos are supplemented by real-time tester input via picture-in-a-picture webcam and audio commentary from microphones.

Visual testing provides a number of advantages. The quality of communication is increased drastically because testers can show the problem (and the events leading up to it) to the developer as opposed to just describing it and the need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he or she requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.

Visual testing is particularly well-suited for environments that deploy agile methods in their development of software, since agile methods require greater communication between testers and developers and collaboration within small teams.

Ad hoc testing and exploratory testing are important methodologies for checking software integrity, because they require less preparation time to implement, while the important bugs can be found quickly. In ad hoc testing, where testing takes place in an improvised, impromptu way, the ability of a test tool to visually record everything that occurs on a system becomes very important in order to document the steps taken to uncover the bug.

Visual testing is gathering recognition in customer acceptance and usability testing, because the test can be used by many individuals involved in the development process. For

the customer, it becomes easy to provide detailed bug reports and feedback, and for program users, visual testing can record user actions on screen, as well as their voice and image, to provide a complete picture at the time of software failure for the developers.

Grey-box testing

Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing tests, while executing those tests at the user, or black-box, level. The tester is not required to have full access to the software's source code.[29] Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test.

However, tests that require modifying a back-end data repository such as a database or a log file do qualify as grey-box, as the user would not normally be able to change the data repository in normal production operations. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

By knowing the underlying concepts of how the software works, the tester makes better-informed testing choices while testing the software from outside. Typically, a grey-box tester will be permitted to set up an isolated testing environment with activities such as seeding a database. The tester can observe the state of the product being tested after performing certain actions such as executing SQL statements against the database and then executing queries to ensure that the expected changes have been reflected. Grey-box testing implements intelligent test scenarios, based on limited information. This will particularly apply to data type handling, exception handling, and so on.[30]

Testing levels

There are generally four recognized levels of tests: unit testing, integration testing, component interface testing, and system testing. Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test. The main levels during the development process as defined by the SWEBOK guide are unit-, integration-, and system testing that are distinguished by the test target without implying a specific process model.[31] Other test levels are classified by the testing objective.[31]

There are two different levels of tests from the perspective of customers: low-level testing (LLT) and high-level testing (HLT). LLT is a group of tests for different level components of a software application or product. HLT is a group of tests for the whole software application or product.

Unit testing

Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.[32]

These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the


functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently from each other.

Unit testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA focuses, it augments them. Unit testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.

Depending on the organization's expectations for software development, unit testing might include static code analysis, data-flow analysis, metrics analysis, peer code reviews, code coverage analysis and other software verification practices.
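A minimal developer-written unit test using Python's built-in unittest module (the function under test and its test cases are invented):

import unittest

def net_price(gross, discount):
    # Function under test: apply a fractional discount to a gross price.
    if not 0 <= discount <= 1:
        raise ValueError("discount must be between 0 and 1")
    return gross * (1 - discount)

class NetPriceTest(unittest.TestCase):
    def test_typical_case(self):
        self.assertAlmostEqual(net_price(100.0, 0.2), 80.0)

    def test_corner_cases(self):
        # Corner cases and error branches, as the text recommends.
        self.assertEqual(net_price(100.0, 0.0), 100.0)
        with self.assertRaises(ValueError):
            net_price(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()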

Integration testing

Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed.

Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.[33]

Component interface testing

The practice of component interface testing can be used to check the handling of data passed between various units, or subsystem components, beyond full integration testing between those units.[34][35] The data being passed can be considered as "message packets" and the range or data types can be checked, for data generated from one unit, and tested for validity before being passed into another unit. One option for interface testing is to keep a separate log file of data items being passed, often with a timestamp logged to allow analysis of thousands of cases of data passed between units for days or weeks. Tests can include checking the handling of some extreme data values while other interface variables are passed as normal values.[34] Unusual data values in an interface can help explain unexpected performance in the next unit. Component interface testing is a variation of black-box testing,[35] with the focus on the data values beyond just the related actions of a subsystem component.

System testing

System testing, or end-to-end testing, tests a completely integrated system to verify that the system meets its requirements.[36] For example, a system test might involve testing a logon interface, then creating and editing an entry, plus sending or printing results, followed by summary processing or deletion (or archiving) of entries, then logoff.

Operational acceptance testing


Operational Acceptance is used to conduct operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment. Hence, it is also known as operational readiness testing (ORT) or Operations readiness and assurance (OR&A) testing. Functional testing within OAT is limited to those tests which are required to verify the non-functional aspects of the system.

In addition, the software testing should ensure that the portability of the system, as well as working as expected, does not also damage or partially corrupt its operating environment or cause other processes within that environment to become inoperative.[37]

Bugzilla

Bugzilla is a Web-based general-purpose bugtracker and testing tool originally developed and used by the Mozilla project, and licensed under the Mozilla Public License.

Released as open source software by Netscape Communications in 1998, it has been adopted by a variety of organizations for use as a bug tracking system for both free and open source software and proprietary projects and products. Bugzilla is used, among others, by Mozilla Foundation, Wikimedia Foundation, WebKit, NASA, Yahoo!, GNOME, KDE, Red Hat and Novell.[2]

Documentation

Developing good user documentation is an important part of the implementation

process. Sample data entry display screens, forms, and reports are good examples

of documentation. When computer-aided systems engineering methods are used,

documentation can be created and changed easily since it is stored and accessible

on disk in a system repository. Documentation serves as a method of communication

among the people responsible for developing, implementing, and maintaining a

computer-based system. Installing and operating a newly designed system or

modifying an established application requires a detailed record of that system’s

design. Documentation is extremely important in diagnosing errors and making


changes, especially if the end users or systems analysts who developed a system are

no longer with the organization.

Training

Training is a vital implementation activity. IS personnel, such as user consultants,

must be sure that end users are trained to operate a new e-business system or its

implementation will fail. Training may involve only activities like data entry, or it may

also involve all aspects of the proper use of a new system. In addition, managers and

end users must be educated in how the new technology impacts the company’s

business operations and management. This knowledge should be supplemented by

training programs for any new hardware devices, software packages, and their use

for specific work activities.

- Seminars (general)
- On-the-job training (specific)

Conversion methods

The initial operation of a new business system can be a difficult task. This typically

requires a conversion process from the use of a present system to the operation of

a new or improved application. Conversion methods can soften the impact of

introducing new information technologies into an organization. Four major forms of

system conversion are illustrated in figure.

Figure: Conversion methods. [Diagram: four ways of moving from the old system to the new system: parallel, phased, pilot, and plunge (direct cutover).]

Conversions can be

Parallel

Phased

Pilot

Plunge or direct cutover

Conversions can be done on a parallel basis, whereby both the old and the new

systems are operating until the project development team and end user management

agree to switch completely over to the new system. It is during this time that the

operations and results of both systems are compared and evaluated. Errors can be

identified and corrected, and the operating problems can be solved before the old

system is abandoned. Installation can also be accomplished by a direct cutover or

plunge to a newly developed system.

Conversion can also be done on a phased basis, where only parts of a new

application or only a few departments, branch offices, or plant locations at a time are

converted. A phased conversion allows a gradual implementation process to take

place within an organization. Similar benefits accrue from using a pilot conversion,

where one department or other work site serves as a test site. A new system can be tried out at this site until developers feel it can be implemented throughout the organization.

IS maintenance


Once a system is fully implemented and is being used in business operations, the

maintenance function begins. System maintenance is the monitoring, evaluating,

and modifying of operational business systems to make desirable or necessary

improvements. For example, the implementation of a new system usually results in

the phenomenon known as the learning curve. Personnel who operate and use the

system will make mistakes simply because they are not familiar with it. Though such

errors usually diminish as experience is gained with a new system, they do point out

areas where a system may be improved.

Maintenance is also necessary for other failures and problems that arise during the

operation of a system. End users and information systems personnel then perform a

troubleshooting function to determine the causes of and solutions to such problems.

The maintenance activity includes a post-implementation review process to ensure

that newly implemented systems meet the business objectives established for them.

Errors in the development or use of a system must be corrected by the maintenance

process. This includes a periodic review or audit of a system to ensure that it is

operating properly and meeting its objectives. This audit is in addition to continually

monitoring a new system for potential problems or necessary changes.

Maintenance also includes making modifications to an established system due to

changes in the business organization or the business environment. For example,

new tax legislation, company reorganizations, and new e-commerce initiatives may

require major changes to current business systems.

E-learning example: ITU Hospital Management Certificate Program

http://www.enoctaakademi.com

Online analytical processing

(From Wikipedia, the free encyclopedia)

In computing, online analytical processing, or OLAP (pronounced /ˈoʊlæp/) is an approach to swiftly answer multi-

dimensional analytical queries.[1] OLAP is part of the broader category of business intelligence, which also

encompasses relational reporting and data mining.[2] Typical applications of OLAP include business reporting for

sales, marketing, management reporting, business process management (BPM),[3] budgeting and forecasting, financial

reporting and similar areas, with new applications emerging, such as agriculture.[4] The term OLAP was created as a

slight modification of the traditional database term OLTP (Online Transaction Processing).[5]

Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad-hoc queries

with a rapid execution time. They borrow aspects of navigational databases and hierarchical databases that are faster

than relational databases.[6]

The output of an OLAP query is typically displayed in a matrix (or pivot) format. The dimensions form the rows and

columns of the matrix; the measures form the values.
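As a quick illustration of that matrix layout, the following sketch (with invented sales figures) uses the pandas pivot_table function: the region and quarter dimensions become the rows and columns, and the revenue measure fills the cells.

# Minimal illustration of the matrix (pivot) output of an OLAP-style
# query; the sales data below are invented for the example.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [120, 135, 90, 110],
})

# Dimensions (region, quarter) form the rows and columns of the matrix;
# the measure (revenue) fills the values.
pivot = sales.pivot_table(index="region", columns="quarter",
                          values="revenue", aggfunc="sum")
print(pivot)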

Data warehouse

(From Wikipedia, the free encyclopedia)

Figure: Data warehouse overview.

In computing, a data warehouse or enterprise data warehouse (DW, DWH, or EDW) is a database used

for reporting and data analysis. It is a central repository of data which is created by integrating data from

multiple disparate sources. Data warehouses store current as well as historical data and are used for

creating trending reports for senior management reporting such as annual and quarterly comparisons.

The data stored in the warehouse are uploaded from the operational systems (such as marketing, sales

etc., shown in the overview figure). The data may pass through an operational data store for additional

operations before they are used in the DW for reporting.

The typical ETL-based data warehouse uses staging, integration, and access layers to house its key

functions. The staging layer or staging database stores raw data extracted from each of the disparate

source data systems. The integration layer integrates the disparate data sets by transforming the data from

the staging layer, often storing this transformed data in an operational data store (ODS) database. The

integrated data are then moved to yet another database, often called the data warehouse database, where

the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts.

The combination of facts and dimensions is sometimes called a star schema. The access layer helps users

retrieve data.[1]
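The staging, integration (ODS), and warehouse (star schema) layers described above can be sketched in a few lines of Python. Everything here is illustrative: the two source extracts, their field names, and the conform function are invented for the example.

# Toy sketch of the staging / integration / access layers of an
# ETL-based warehouse; all records and field names are invented.

# Staging layer: raw extracts from two disparate source systems.
crm_extract = [{"cust": "Ada", "SALES": "120.50"}]
erp_extract = [{"customer_name": "ada", "amount": 99.9}]

def conform(record):
    """Integration step: map either source layout onto one conformed shape."""
    name = (record.get("cust") or record.get("customer_name")).title()
    amount = float(record.get("SALES") or record.get("amount"))
    return {"customer": name, "amount": amount}

ods = [conform(r) for r in crm_extract + erp_extract]  # the ODS role

# Warehouse layer: split into a dimension and a fact table (a tiny star schema).
dim_customer = {name: key for key, name in
                enumerate(sorted({r["customer"] for r in ods}))}
fact_sales = [{"customer_key": dim_customer[r["customer"]], "amount": r["amount"]}
              for r in ods]

# Access layer: a simple aggregate query over the facts.
totals = {}
for fact in fact_sales:
    totals[fact["customer_key"]] = totals.get(fact["customer_key"], 0) + fact["amount"]
print(dim_customer, totals)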

A data warehouse constructed from integrated data source systems does not require ETL, staging

databases, or operational data store databases. The integrated data source systems may be considered to

be a part of a distributed operational data store layer. Data federation methods or data

virtualization methods may be used to access the distributed integrated source data systems to consolidate

and aggregate data directly into the data warehouse database tables. Unlike the ETL-based data

warehouse, the integrated source data systems and the data warehouse are all integrated since there is no

transformation of dimensional or reference data. This integrated data warehouse architecture supports

the drill down from the aggregate data of the data warehouse to the transactional data of the integrated

source data systems.

Data warehouses can be subdivided into data marts. Data marts store subsets of data from a warehouse.

This definition of the data warehouse focuses on data storage. The main source of the data is cleaned,

transformed, cataloged and made available for use by managers and other business professionals for data

mining, online analytical processing, market research and decision support (Marakas & O'Brien 2009).

However, the means to retrieve and analyze data, to extract, transform and load data, and to manage

the data dictionary are also considered essential components of a data warehousing system. Many

references to data warehousing use this broader context. Thus, an expanded definition for data

warehousing includes business intelligence tools, tools to extract, transform and load data into the

repository, and tools to manage and retrieve metadata.

Data mart

(From Wikipedia, the free encyclopedia)

A data mart is the access layer of the data warehouse environment that is used to get data out to the

users. The data mart is a subset of the data warehouse that is usually oriented to a specific business line or

team. In some deployments, each department or business unit is considered the owner of its data mart

including all the hardware, software and data.[1] This enables each department to use, manipulate and

develop their data in any way they see fit, without altering information inside other data marts or the data

warehouse. In other deployments where conformed dimensions are used, this business unit ownership will

not hold true for shared dimensions like customer, product, etc.
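A toy illustration of that subset relationship, with an invented warehouse table: the marketing mart carries only the marketing rows, so the department can reshape its copy without touching other marts.

# Hypothetical sketch: a data mart as a business-line subset of a
# warehouse fact table; table and column names are invented.
warehouse_sales = [
    {"dept": "marketing", "year": 2012, "amount": 500},
    {"dept": "finance",   "year": 2012, "amount": 800},
    {"dept": "marketing", "year": 2011, "amount": 450},
]

# The marketing data mart holds only its own slice of the warehouse.
marketing_mart = [row for row in warehouse_sales if row["dept"] == "marketing"]
print(marketing_mart)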

The related term spreadmart describes the situation that occurs when one or more business analysts

develop a system of linked spreadsheets to perform a business analysis, then grow it to a size and degree

of complexity that makes it nearly impossible to maintain.

The primary use for a data mart is business intelligence (BI) applications. BI is used to gather, store, access

and analyze data. The data mart gives smaller businesses a way to utilize the data they have

accumulated. A data mart can be less expensive than implementing a data warehouse, thus making it more

cost-effective for the small business. A data mart can also be set up in much less time than a data

warehouse, often in less than 90 days. Since most small businesses only have use for a

small number of BI applications, the low cost and quick set up of the data mart makes it a suitable method

for storing data.[2]

Big data

(From Wikipedia, the free encyclopedia)

Figure: A visualization created by IBM of Wikipedia edits. At multiple terabytes in size, the text and images of Wikipedia are a classic example of big data.

In information technology, big data[1][2] is a collection of data sets so large and complex that it becomes

difficult to process using on-hand database management tools. The challenges include capture, curation,

storage,[3] search, sharing, analysis,[4] and visualization. The trend to larger data sets is due to the additional

information derivable from analysis of a single large set of related data, as compared to separate smaller

sets with the same total amount of data, allowing correlations to be found to "spot business trends,

determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time

roadway traffic conditions."[5][6][7]

As of 2012, limits on the size of data sets that are feasible to process in a reasonable amount of time were

on the order of exabytes of data.[8][9] Scientists regularly encounter limitations due to large data sets in many

areas, including meteorology, genomics,[10] connectomics, complex physics simulations,[11] and biological

and environmental research.[12] The limitations also affect Internet search, finance and business informatics.

Data sets grow in size in part because they are increasingly being gathered by ubiquitous information-

sensing mobile devices, aerial sensory technologies (remote sensing), software logs, cameras,

microphones, radio-frequency identification readers, and wireless sensor networks.[13][14] The world's

technological per-capita capacity to store information has roughly doubled every 40 months since the

1980s;[15] as of 2012, every day 2.5 quintillion (2.5×10¹⁸) bytes of data were created.[16]

Big data is difficult to work with using relational databases and desktop statistics and visualization

packages, requiring instead "massively parallel software running on tens, hundreds, or even thousands of

servers".[17] What is considered "big data" varies depending on the capabilities of the organization managing

the set. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need

to reconsider data management options. For others, it may take tens or hundreds of terabytes before data

size becomes a significant consideration."[18]
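As a small taste of that massively parallel style, the sketch below splits a word count across local worker processes in the map-reduce pattern; real big-data platforms apply the same idea across many servers. The input chunks are invented.

# Map-reduce word count spread over local worker processes; a toy
# stand-in for the "massively parallel software" quoted above.
from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    """Map step: count words in one chunk of text."""
    return Counter(chunk.split())

if __name__ == "__main__":
    chunks = ["big data big servers", "data sets grow", "big sets"]
    with Pool(processes=3) as pool:
        partial_counts = pool.map(count_words, chunks)
    # Reduce step: merge the per-chunk counts into one total.
    total = sum(partial_counts, Counter())
    print(total.most_common(3))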

Agile software development

(From Wikipedia, the free encyclopedia)

Figure: Generic diagram of an agile methodology for software development.

Agile software development is a group of software development methods based on iterative and

incremental development, where requirements and solutions evolve through collaboration between self-

organizing, cross-functional teams. It promotes adaptive planning, evolutionary development and delivery,

a time-boxed iterative approach, and encourages rapid and flexible response to change. It is a conceptual

framework that promotes foreseen interactions throughout the development cycle. The Agile

Manifesto[1] introduced the term in 2001.

BALANCED SCORECARD

AI: ARTIFICIAL INTELLIGENCE

Expert system example: http://easydiagnosis.com/

Recommender system

(From Wikipedia, the free encyclopedia)

Recommender systems or recommendation systems (sometimes replacing "system" with a synonym

such as platform or engine) are a subclass of information filtering system that seek to predict the 'rating' or

'preference' that a user would give to an item (such as music, books, or movies) or social element

(e.g. people or groups) they had not yet considered, using a model built from the characteristics of an item

(content-based approaches) or the user's social environment (collaborative filtering approaches). [1][2]

Recommender systems have become extremely common in recent years. A few examples of such systems:

- When viewing a product on Amazon.com, the store will recommend additional items based on a matrix of what other shoppers bought along with the currently selected item.[3]

- Pandora Radio takes an initial input of a song or musician and plays music with similar characteristics (based on a series of keywords attributed to the inputted artist or piece of music). The stations created by Pandora can be refined through user feedback (emphasizing or deemphasizing certain characteristics).

- Netflix offers predictions of movies that a user might like to watch based on the user's previous ratings and watching habits (as compared to the behavior of other users), also taking into account the characteristics (such as the genre) of the film.
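A minimal collaborative-filtering sketch, with an invented ratings matrix, shows the basic idea behind such systems: a user's unknown rating is predicted as a similarity-weighted average of other users' ratings. Production systems like those above use far richer models.

# Toy user-based collaborative filtering; all ratings are invented.
import math

ratings = {                      # user -> {item: rating}
    "ann": {"matrix": 5, "up": 3, "heat": 4},
    "bob": {"matrix": 4, "up": 2, "heat": 5},
    "cem": {"up": 5, "heat": 1},
}

def cosine(u, v):
    """Cosine similarity over the items two users both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (math.sqrt(sum(u[i] ** 2 for i in shared)) *
                  math.sqrt(sum(v[i] ** 2 for i in shared)))

def predict(user, item):
    """Similarity-weighted average of other users' ratings for item."""
    scores = [(cosine(ratings[user], r), r[item])
              for name, r in ratings.items() if name != user and item in r]
    total_sim = sum(s for s, _ in scores)
    return sum(s * r for s, r in scores) / total_sim if total_sim else None

print(predict("cem", "matrix"))  # estimate cem's rating for an unseen item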

DECISION SUPPORT SYSTEMS

http://p1m1.com/solutions/

SAS (Statistical Analysis Software) is a software suite developed by SAS Institute for advanced analytics, business intelligence, data management, and predictive analytics.

http://ecommerce-software-review.toptenreviews.com/
