Speakers: Surinderjeet Singh Ambuj Pushkar Ojha Ilyeech Kishore Rapelli Bhoor S Raj Meena


Page 1:

Speakers:

Surinderjeet Singh
Ambuj Pushkar Ojha

Ilyeech Kishore Rapelli
Bhoor S Raj Meena

Page 2:

Human decision making is a non-trivial activity, and it would be an AI marvel to capture it

Every human being has a notion of having made a decision

Before making any decision, people reason

We will look at human decision making with the intention of mimicking it through AI

First, let us look at some of the varying views of AI

Page 3:

The science of designing and building computer-based artifacts that can perform various human tasks

This view has few links with decision making

A decision, if any, has already been made by the designer of the system

All in all, the concept of 'decision' is in conflict with the idea of a program

Page 4:

Science aimed at mimicking human beings

Needs to incorporate human decision making skills

Human beings have preferences and make subjective decisions

AI becomes a subjective science rather than a generic science

Page 5:

Figure: a decision problem is the passage from the current state of the world to a more desirable (future) state.

Page 6:

Before making a decision, the subject recognizes the current state, which contains information about the past and the present

Keeping in mind his perception of the current state, the subject tries to identify it with reference to his experience (recorded states)

The first phase of decision is then to find one or more recorded states close to the perceived current state. This is called 'pattern matching' or 'diagnosis'.

Page 7:

Let 'E' (expectations) be a representation of the future events uncontrolled and uninfluenced by the subject.

Let 'A' denote the set of all possible actions.

FS(Si, A, E) defines the set of all future states attainable from the current state Si.

Depending on the various expectations, many states are attainable, with different probabilities.

The preferred outcome, among all the outcomes OC, defines the action to be chosen.
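
To make this look-ahead step concrete, here is a minimal sketch that chooses the action with the highest expected utility under the notation above; the action names, probabilities and utility values are invented for illustration and are not from the slides.

```python
# Minimal sketch of decision-theoretic choice (illustrative, invented numbers):
# the expectations E assign probabilities to the future states FS(Si, A, E)
# reachable under each action; the subject's preferences are encoded as
# utilities, and the preferred outcome determines the chosen action.

# FS[action] = list of (future_state, probability) pairs attainable from the current state Si
FS = {
    "invest": [("market_up", 0.6), ("market_down", 0.4)],
    "wait":   [("market_up", 0.6), ("market_down", 0.4)],
}

# (Subjective) preferences P, encoded as utilities over (action, future_state) outcomes
utility = {
    ("invest", "market_up"): 10, ("invest", "market_down"): -5,
    ("wait", "market_up"): 1,    ("wait", "market_down"): 1,
}

def choose_action(FS, utility):
    """Return the action whose attainable outcomes have the highest expected utility."""
    def expected_utility(action):
        return sum(p * utility[(action, state)] for state, p in FS[action])
    return max(FS, key=expected_utility)

print(choose_action(FS, utility))   # -> 'invest'  (0.6*10 + 0.4*(-5) = 4, versus 1 for 'wait')
```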

Page 8:

Figure: the overall decision process. The perceived current state (encompassing past and present) is matched against the file of recorded states (diagnosis); the look-ahead step then combines the recognized future states with the actions A, the (subjective) expectations E and the (subjective) preferences P to produce the chosen action.

Page 9:

The current state may not be known with certainty and may not be unique.

Future states may not be known with certainty.

The action (or alternative) set is not given and can change during the process of reasoning.

Many real decision makers study just a small subset of the possible alternatives.

The states of nature or the consequences are not easy to determine.

The decision process is not linear; many backtracks can occur.

The distinction between action and outcome may become vague.

Page 10:

We may have very reactive systems, where each action is almost immediately followed by a modification of the state and a new decision

In such systems, the role of the environment is very weak

Two types of decisions can be distinguished on the basis of the granularity of the time difference between consecutive decisions: 'almost continuous decision' and 'discrete decision'

The significance of an outcome for a subject frequently involves many sub-actions leading to the outcome

A sequence of sub-actions intertwined with events gives a scenario

Page 11:
Page 12:

Diagnosis is the problem of recognizing the current state as accurately as possible. The current state is subject to evaluation and comparison. Classification is a tool for decision making; this method can be applied if the number of actions is finite and a mapping phi exists. The decision is then phi(Si) = Aj, based on the attributes characterizing Si, and phi^-1(Aj) realizes a partition of S in this case.
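
As a toy illustration of classification as decision, the sketch below encodes phi as a lookup from diagnosed state to action and shows the partition induced by phi^-1; the state and action names are assumptions, not from the source.

```python
# Toy sketch (invented states and actions): phi maps each diagnosed current
# state Si to an action Aj, and phi^-1(Aj) partitions the state set S into
# blocks of states that receive the same action.
from collections import defaultdict

phi = {
    "temperature_high": "cool_down",
    "kiln_overheating": "cool_down",
    "pressure_high":    "open_valve",
    "pressure_low":     "close_valve",
}

def phi_inverse(phi):
    """Group states by the action they map to: the partition of S induced by phi."""
    partition = defaultdict(set)
    for state, action in phi.items():
        partition[action].add(state)
    return dict(partition)

print(phi["kiln_overheating"])   # -> 'cool_down'
print(phi_inverse(phi))          # e.g. 'cool_down' -> {'temperature_high', 'kiln_overheating'}
```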

Page 13:

An expert system is essentially a diagnostic machine; here we consider the semantics of its working. The input facts describe a given situation (the current state) and the output is either a diagnosis (e.g. the patient is suffering from a certain illness (MYCIN), the situation of the client is quite good (Risk Manager)) or a recommendation or an action (e.g. increase the flow of oil, decrease the temperature of the kiln).

Page 14:

Figure: simplified decision process in an expert system. The mapping phi takes the diagnosed current state directly to a chosen action (an outcome set), guided by the preferences; as shown in the figure, expert systems shunt the look-ahead step.

Page 15:

Rough Set Theory: the rough sets methodology provides definitions and methods for finding which attributes separate one class or classification from another. Since inconsistencies are allowed and membership in a set does not have to be absolute, the potential for handling noise gracefully is high.

Page 16:

Example: Consider a table:

Id   %Gln   %Pro   +Chg   Structure
1    12     6      0.2    all-a
2    12     6      0.2    all-a
3    8      6      0.2    all-b
4    12     2      0.12   a/b
5    12     2      0.12   a+b
6    12     2      0.12   a+b

Page 17:

Using this training data we want to use rough sets to derive some rules that will enable us to determine the structure of a novel protein given the attributes describing that protein.

In rough set terminology, the structure is the decision attribute and %Gln, %Pro and +Chg are the condition attributes.

Page 18:

Note that equivalence classes can contain ids that have different decision attributes (i.e. structures).

The next step is to construct a discernibility matrix, which is:

       E1                 E2                  E3
E1     -                  %Gln                %Pro, +Chg
E2     %Gln               -                   %Gln, %Pro, +Chg
E3     %Pro, +Chg         %Gln, %Pro, +Chg    -

Page 19:

The first step is to determine equivalence classes:

Equivalence Class     %Gln   %Pro   +Chg
E1 (ids 1 & 2)        12     6      0.2
E2 (id 3)             8      6      0.2
E3 (ids 4, 5 & 6)     12     2      0.12
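
Both steps can be checked mechanically. The sketch below (illustrative code, not from the slides) recomputes the equivalence classes and the discernibility matrix from the protein table given earlier.

```python
# Sketch: compute equivalence classes and the discernibility matrix for the
# protein table above. The data is copied from the slides; the code itself
# is an illustrative reconstruction, not part of the original material.
from itertools import combinations

conditions = ["%Gln", "%Pro", "+Chg"]
table = {
    1: {"%Gln": 12, "%Pro": 6, "+Chg": 0.2,  "Structure": "all-a"},
    2: {"%Gln": 12, "%Pro": 6, "+Chg": 0.2,  "Structure": "all-a"},
    3: {"%Gln": 8,  "%Pro": 6, "+Chg": 0.2,  "Structure": "all-b"},
    4: {"%Gln": 12, "%Pro": 2, "+Chg": 0.12, "Structure": "a/b"},
    5: {"%Gln": 12, "%Pro": 2, "+Chg": 0.12, "Structure": "a+b"},
    6: {"%Gln": 12, "%Pro": 2, "+Chg": 0.12, "Structure": "a+b"},
}

# Equivalence classes: ids that agree on every condition attribute
classes = {}
for pid, row in table.items():
    key = tuple(row[c] for c in conditions)
    classes.setdefault(key, []).append(pid)
print(list(classes.values()))   # -> [[1, 2], [3], [4, 5, 6]]  (E1, E2, E3)

# Discernibility matrix: for each pair of classes, the attributes that differ
for (k1, ids1), (k2, ids2) in combinations(classes.items(), 2):
    differing = [c for c, v1, v2 in zip(conditions, k1, k2) if v1 != v2]
    print(ids1, "vs", ids2, "->", differing)
# E1 vs E2 -> ['%Gln'];  E1 vs E3 -> ['%Pro', '+Chg'];  E2 vs E3 -> ['%Gln', '%Pro', '+Chg']
```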

Page 20:

The axes are the equivalence classes and the cells contain the condition attributes that differentiate between those classes.

The relative discernability functions here are:

* f(E1) = (%Gln) AND (%Pro OR +Chg)
* f(E2) = (%Gln) AND (%Gln OR %Pro OR +Chg)
* f(E3) = (%Pro OR +Chg) AND (%Gln OR %Pro OR +Chg)

Page 21:

The relative reduct is calculated by taking the relative discernability functions and removing superfluous attributes:

* RED(E1) = (%Gln AND %Pro) OR (%Gln AND +Chg)
* RED(E2) = %Gln
* RED(E3) = %Pro OR +Chg

Now, to derive some rules from our reducts, we need to bind the condition attribute values of the equivalence class from which the reduct originated to the corresponding attributes of the reduct.

Page 22:

Using the rules we have generated we can determine the structure of novel proteins.

Each type of structure will have:
- a lower approximation: the set of proteins which definitely have that structure
- an upper approximation: the set of proteins which may possibly have that structure
- a boundary region: the set of proteins whose structure cannot be proven either way.

Proteins can belong to more than one set.

Page 23:

e.g., all of the proteins making up equivalence class E1 have the condition attributes 12%, 6% and 0.2 for % Gln, % Pro and +Chg respectively. We can feed those values into RED(E1) to derive a relevant rule.

* from RED(E1): if Gln = 12 and Pro = 6 => structure = all-a
* from RED(E1): if Gln = 12 and +Chg = 0.2 => structure = all-a
* from RED(E2): if Gln = 8 => structure = all-b
* from RED(E3): if Pro = 2 => structure = (2/3 chance of a+b, 1/3 chance of a/b)
* from RED(E3): if +Chg = 0.12 => structure = (2/3 chance of a+b, 1/3 chance of a/b)
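
These rules can be applied directly to new data; the sketch below hard-codes them in a small function (the function name and the example inputs are assumptions for illustration).

```python
# Sketch: apply the rules derived from the reducts to a novel protein.
# The rule bodies mirror the slide above; the function and example inputs are assumptions.
def predict_structure(gln, pro, chg):
    if gln == 12 and (pro == 6 or chg == 0.2):
        return "all-a"                                   # from RED(E1)
    if gln == 8:
        return "all-b"                                   # from RED(E2)
    if pro == 2 or chg == 0.12:
        return {"a+b": 2/3, "a/b": 1/3}                  # from RED(E3), probabilistic
    return None                                          # boundary: no rule applies

print(predict_structure(12, 6, 0.2))    # -> 'all-a'
print(predict_structure(12, 2, 0.12))   # -> {'a+b': 0.666..., 'a/b': 0.333...}
```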

Page 24:

Difference between a rough set and a crisp (normal) set: a rough set consists of a tuple <Pl, Pu>, where Pl and Pu are the crisp sets pertaining to the lower and upper approximation for a protein structure.

E.g. Pl(all-a) = {E1}, Pu(all-a) = {E1} → Accuracy = 1

Pl(a+b) = {}, Pu(a+b) = {E3} → Accuracy = 0

Hence the rough set for the all-a protein structure would be <{E1}, {E1}>.
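
The approximations and accuracies quoted above can be reproduced with a short sketch; the code is illustrative, with the equivalence classes and structure labels copied from the earlier table.

```python
# Sketch: lower/upper approximations and accuracy for a given structure,
# using the equivalence classes E1={1,2}, E2={3}, E3={4,5,6} from the table.
classes = {"E1": {1, 2}, "E2": {3}, "E3": {4, 5, 6}}
structure = {1: "all-a", 2: "all-a", 3: "all-b", 4: "a/b", 5: "a+b", 6: "a+b"}

def rough_set(target):
    members = {pid for pid, s in structure.items() if s == target}
    lower = {name for name, ids in classes.items() if ids <= members}   # definitely target
    upper = {name for name, ids in classes.items() if ids & members}    # possibly target
    n_lower = sum(len(classes[n]) for n in lower)
    n_upper = sum(len(classes[n]) for n in upper)
    accuracy = n_lower / n_upper if n_upper else 0.0
    return lower, upper, accuracy

print(rough_set("all-a"))   # -> ({'E1'}, {'E1'}, 1.0)
print(rough_set("a+b"))     # -> (set(), {'E3'}, 0.0)
```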

Page 25:

What are goals and plans?

Planning with certainty:
• STRIPS
• Soar

Planning with uncertainty:
• Non-monotonic logics
• Decision-theoretic planning

Page 26:

The goal is the 'outcome' that the decision maker wants to obtain.

Planning is, given a goal and the current state, finding a sequence of actions (or sub-actions) which leads to the goal from the current state.

Enrichment of notions of goal and plan.

Page 27:

A goal reduced to a particular outcome does not involve other aspects of human decision such as intention and commitment.

One way of encoding intention is preferences.

Jon Doyle proposes priorities instead, but the distinction between preferences and priorities is not clear in many 'intelligent' systems that use them.

Page 28:

A more complex utility function capable of dealing with:
• pursuing several goals
• the consumption of resources by the (sub-)goals satisfied

The utility function is simply the weighted sum of the utilities of the various goals, minus a term depending on the resources used by the partial outcomes possibly attained with probabilities.
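
Read literally, that utility function can be written as the formula below; the symbols w_k, u_k, p_k, r and c are our own notation for the weights, goal utilities, attainment probabilities and resource cost, not notation from the source.

```latex
% Hedged reading of the slide's utility function (notation is illustrative):
% goal k has weight w_k and utility u_k, is attained under action a with
% probability p_k(a); r(a) is the resources consumed and c(.) its cost.
U(a) = \sum_{k} w_k \, p_k(a) \, u_k \;-\; c\bigl(r(a)\bigr)
```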

Page 29:

The state consists of two parts: the state of the environment and the agent's state among all possible mental attitudes.

A mental attitude consists of beliefs, desires and intentions, expressed possibly by probabilities, utilities and priorities respectively.

If priorities are included into the utilities and probabilities into the expectations, this model is very similar to the decision-theoretic model described previously.

Page 30:

Many algorithms are based on regressive search

Best example: STRIPS (Nilsson). In STRIPS, a stack of (sub-)goals to be realized is maintained.

Sub-goals, as predicted by the LHS of a rule leading to a goal, are piled above the goal.

This continues until the current state matches the precondition of a sub-goal completely.
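
A minimal goal-stack sketch in the spirit of STRIPS is given below; the toy rules and propositions are invented, and delete effects, variable binding and conflict protection are deliberately left out.

```python
# Minimal goal-stack sketch (illustrative, propositional, add-effects only):
# sub-goals from a rule's preconditions (LHS) are piled above the goal until
# the current state satisfies the top of the stack.
rules = {
    # action: (preconditions, add effects)
    "pick_up_block": ({"hand_empty", "block_on_table"}, {"holding_block"}),
    "stack_block":   ({"holding_block"},                {"block_on_tower"}),
}

def plan(state, goal):
    stack, plan_out = [goal], []
    while stack:
        top = stack[-1]
        if isinstance(top, str):                     # a goal proposition
            if top in state:
                stack.pop()                          # already satisfied
            else:
                # pick a rule whose effects achieve the goal (first match)
                action = next(a for a, (pre, add) in rules.items() if top in add)
                stack[-1] = ("apply", action)        # remember to apply it later
                stack.extend(rules[action][0])       # pile its preconditions above
        else:                                        # ("apply", action): preconditions met
            _, action = stack.pop()
            state = state | rules[action][1]         # apply the rule's add effects
            plan_out.append(action)
    return plan_out

print(plan({"hand_empty", "block_on_table"}, "block_on_tower"))
# -> ['pick_up_block', 'stack_block']
```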

Page 31:

Most applications of STRIPS function in fully observable worlds, e.g. the robot block-stacking problem.

There are no preferences over goals; just one goal is set.

Still, regressive search is a useful and very basic mechanism in decision theory, especially in human decision making, since it is very human to think in a goal-driven way.

Page 32:

AI proposes two solutions for decision making in dynamic worlds:
• Planning under uncertainty (or decision-theoretic planning)
• Reactive systems

Reactive systems can generate meaningful meta-behaviour from very individualistic decisions (e.g. the 'invisible hand' in economics).

Page 33:

Two views (as contrasted with ways) of planning under uncertainty:
• the first one based on non-monotonic logics
• the second one based on the theory of decision

Doyle and Wellman concluded, based on Arrow's impossibility theorem, that "there is probably no universally acceptable method for rationally resolving conflicts in default reasoning".

Page 34:

A default theory is a pair <D, W>. W is a set of logical formulae, called the background theory, that formalize the facts that are known for sure. D is a set of default rules, each one being of the form: PreReq: {Justif} / Conclusion.

The rule is to be interpreted as: if PreReq is true and {Justif} does not conflict with W, then Conclusion is believed to be true.

e.g. D = {Bird(X) : Flies(X) / Flies(X)}, W = {Bird(Penguin), Bird(Kingfisher), ~Flies(Penguin)}
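
The bird example can be run mechanically. The sketch below is a tiny propositional approximation of the default rule; real default logic works with first-order formulae and extension construction, which this deliberately skips.

```python
# Tiny propositional sketch of the default rule Bird(X) : Flies(X) / Flies(X)
# applied over ground facts (illustrative only).
W = {"Bird(Penguin)", "Bird(Kingfisher)", "~Flies(Penguin)"}
birds = ["Penguin", "Kingfisher"]

beliefs = set(W)
for x in birds:
    prereq, justif = f"Bird({x})", f"Flies({x})"
    # apply the default if the prerequisite holds and the justification
    # does not contradict what is already believed
    if prereq in beliefs and f"~{justif}" not in beliefs:
        beliefs.add(justif)

print("Flies(Kingfisher)" in beliefs)   # -> True
print("Flies(Penguin)" in beliefs)      # -> False (blocked by ~Flies(Penguin))
```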

Page 35:

No voting system can convert the ranked preferences of individuals into a community-wide ranking while also meeting a certain set of reasonable criteria with three or more discrete options to choose from.

e.g. one criterion: if you have an election where C wins between A and C, and you introduce a new candidate B, then either C should still win or B should now win.

Preferences:
40 voters: A > B > C
35 voters: B > C > A
25 voters: C > A > B

Consider: "what if B wasn't running?" You would have had an election like this:
40 voters: A > C
60 voters: C > A

Page 36:

Doyle and Wellman (1989) have noticed that expressing a default rule may be interpreted as expressing preferences between propositions: the subject prefers to believe R rather than (not Q).

So, Doyle concludes from this very similarity between preferences and default-logic rules that rational reasoning cannot be achieved by default logic.

Page 37:

Decision Machine

• One-to-one correspondence between the diagnosed current state and an action
• Improper decision
• Continuous decision
• Programmed decision machine
• Relating the current state to an action does not capture all the complexity of human decision
• Undesirable effects

Page 38:

• The human decision maker is always indispensable: the set of all possible current states cannot be described either extensionally or intensionally (reason: unexpected states).

• Challenge in the development of decision support systems: the designers of decision support systems are therefore confronted with the paradoxical problem of developing systems capable of helping people in future, unanticipated situations.

Page 39:

The What-if Analysis:

• The dissatisfaction stems from what we have identified as look-ahead reasoning.

• Events are often very interdependent and their probabilities remain unknown (a real situation: the price of oil).

• A second difficulty is predicting, or even identifying, all possible reactions of the other agents.

• The ability to perceive the future seems to be a phylogenetic acquisition.

• It includes the capacity for anticipation and the ability to decide against immediate short-term advantage to allow future gains.

Page 40:

The brain is:
1) a predictive biological machine
2) a simulator of action by reference to past experience
3) a set of extremely specialized circuits.

Free will: the capacity to internally simulate actions and make a decision; this is what-if analysis.

What-if analysis is the basis of the human ability to perform look-ahead reasoning.

Scenario reasoning: developing many scenarios and assessing, at least approximately, their probabilities; this is what evaluating the consequences of a choice in decision making amounts to.

Page 41:

It should produce two outputs:

• all possible outcomes at a given horizon
• the probability or plausibility of each outcome

Why machines are necessary here:
• scenario reasoning may lead to a combinatorial explosion
• it is almost impossible to handle long, precise and diverse scenarios.
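
Such a look-ahead machine can be sketched as a scenario enumerator producing exactly those two outputs; the events, probabilities and horizon below are invented, and the branching**horizon count illustrates the combinatorial explosion just mentioned.

```python
# Sketch of a look-ahead machine's two outputs: all outcomes at a given horizon
# and their probabilities (illustrative, invented events and numbers).
from itertools import product

# at each period an uncontrolled event happens with some probability
event_outcomes = [("demand_up", 0.5), ("demand_flat", 0.3), ("demand_down", 0.2)]
horizon = 3

scenarios = {}
for path in product(event_outcomes, repeat=horizon):
    outcome = tuple(name for name, _ in path)        # the scenario itself
    prob = 1.0
    for _, p in path:
        prob *= p                                    # assuming independent periods
    scenarios[outcome] = prob

print(len(scenarios))                                # -> 27 scenarios (3 ** 3)
print(max(scenarios.items(), key=lambda kv: kv[1]))  # most plausible scenario
# -> (('demand_up', 'demand_up', 'demand_up'), 0.125)
```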

Page 42:

Look-ahead machine: necessary capabilities.
1) the ability to combine many actions and events (with probabilities or measures)
2) the ability to imagine the possible actions and to anticipate all possible reactions of the other agents or of nature.

• The imagination ability is provided by the file of recorded states.

• All possible events and reactions of the other agents are drawn from a set of memorized items.

• The intrinsic difficulty of forecasting is the main weakness of many formalized planning processes.

Page 43:

Candidates for look-ahead machines:
1) simulation machines
2) DSS (decision support systems)

Simulation machine:
• a real industrial or social process is modeled on a reduced scale
• some variables characterizing uncertain events are randomly fed into the system according to a given probability law
• a way of looking ahead when it is impracticable to model the process completely (hard modeling)
• subjects are insensitive to the implications of feedback when medium or long delays occur between a decision and its effects.
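
A minimal sketch of such a simulation machine follows: an invented, reduced-scale inventory model in which uncertain demand is drawn according to a given probability law, so that candidate decisions can be compared before committing to one.

```python
# Sketch of a simulation machine (invented inventory model): uncertain demand
# is drawn from a given probability law and used to "look ahead" at the
# consequences of an ordering decision.
import random

def simulate(order_per_day, days=30, seed=0):
    rng = random.Random(seed)
    stock, lost_sales = 20, 0
    for _ in range(days):
        demand = rng.choice([0, 1, 2, 3, 4, 5])   # uncertain event, uniform law here
        sold = min(stock, demand)
        lost_sales += demand - sold
        stock = stock - sold + order_per_day
    return stock, lost_sales

# what-if: compare two candidate decisions before committing to one
print(simulate(order_per_day=2))   # final stock and lost sales under a low order rate
print(simulate(order_per_day=3))   # ... and under a higher one
```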

Page 44:

DSS: previously a DSS was described as a multimodal, interactive system to perform an exploration; here it is conceived as a look-ahead machine.

At the input-data level, this is called what-if or sensitivity analysis.

At the model level, heuristic search allows the decision maker to explore different types of models, to look ahead among the many possible situations that may occur (e.g. air traffic control).

A DSS is an incomplete look-ahead machine: the decision is left to the decision maker, who has to perform the numerous evaluations that occur along the exploration process.

Rather than focusing on choice, designers would do better to enable richer scenarios by being able to produce and handle complex actions and situations.

Page 45:

AI contributions to DSS:
• the ability to put forward better and more sophisticated representations, allowing more complex states and reasoning to be handled
• modeling the process of unstructured tasks (e.g. expert technical systems)
• the junction between the set of recorded states and the generation of scenarios
• this link necessarily encompasses some learning
• the learning process plays a significant role in the human mind.

Page 46:

Decision theory and AI complement each other and are just beginning to merge

AI has devoted much attention to diagnosis and to representing human knowledge, but not much work has gone into the look-ahead phase of decision making.

As of now, most AI work starts after a decision has been made, but one cannot simulate human reasoning without taking into account the uncertainty and preferences that go into human decision making.

In the end, we would like to say that there is no doubt that diagnosis plus look-ahead machines have a brilliant future, if not to mimic human reasoning, at least to support human decision.

Page 47:

Jean-Charles Pomerol, "Artificial Intelligence and Human Decision Making", European Journal of Operational Research, 1997.

Wikipedia: Decision Trees, Rough Sets, Arrow's Impossibility Theorem.

Rough Sets: http://www.pw.ntnu.no/~hgs/project/report/node38.html