
CS2351 ARTIFICIAL INTELLIGENCE
PART A - TWO MARKS
UNIT I

1) What is AI?

AI is the study of how to make computers do things at which, at the moment, people are better. AI systems are commonly grouped into four categories: systems that think like humans; systems that think rationally; systems that act like humans; systems that act rationally.

2) Define an agent.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
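The percept-to-action coupling in this definition can be sketched in code. This is an illustrative skeleton, not from the syllabus; the `EchoAgent` policy is a hypothetical placeholder.

```python
class Agent:
    """An agent maps each percept to an action; subclasses define the policy."""

    def program(self, percept):
        raise NotImplementedError


class EchoAgent(Agent):
    # Hypothetical toy policy: act directly on whatever is perceived.
    def program(self, percept):
        return f"act-on({percept})"


def run(agent, percepts):
    """Drive the agent: feed it percepts (sensors), collect actions (actuators)."""
    return [agent.program(p) for p in percepts]


print(run(EchoAgent(), ["hot", "cold"]))
```

Any concrete agent (reflex, goal-based, utility-based) is just a different `program`.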

3) What is an agent function?
An agent's behavior is described by the agent function, which maps any given percept sequence to an action.

4) Differentiate an agent function and an agent program.
Agent function: an abstract mathematical description. Agent program: a concrete implementation, running on the agent architecture.

5) What can AI do today?

6) What is a task environment? How is it specified?
Task environments are essentially the "problems" to which rational agents are the "solutions." A task environment is specified using a PEAS (Performance, Environment, Actuators, Sensors) description. Give an example of a PEAS description for an automated taxi.
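A PEAS description for the automated taxi can be sketched as a simple structure, following the standard textbook example; the entries below are illustrative, not exhaustive.

```python
# PEAS description for an automated taxi (illustrative values).
peas_taxi = {
    "Performance": ["safe", "fast", "legal", "comfortable trip", "maximize profits"],
    "Environment": ["roads", "other traffic", "pedestrians", "customers"],
    "Actuators": ["steering", "accelerator", "brake", "signal", "horn", "display"],
    "Sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer",
                "engine sensors", "keyboard"],
}

for component, examples in peas_taxi.items():
    print(component, "->", ", ".join(examples))
```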

7) Give PEAS description for different agent types.

8) List the properties of task environments.
Fully observable vs. partially observable; deterministic vs. stochastic; episodic vs. sequential; static vs. dynamic; discrete vs. continuous; single agent vs. multiagent.

9) Write a function for the table-driven agent.
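A minimal sketch of the classic table-driven agent for question 9: it appends each percept to the percept sequence and looks the whole sequence up in a table. The two-entry table below is a hypothetical example.

```python
percepts = []  # the percept sequence observed so far

# Hypothetical lookup table indexed by the full percept sequence.
table = {
    ("A-dirty",): "Suck",
    ("A-dirty", "A-clean"): "Right",
}


def table_driven_agent(percept):
    """Append the percept to the sequence, then look the sequence up."""
    percepts.append(percept)
    return table.get(tuple(percepts))


print(table_driven_agent("A-dirty"))   # looks up ("A-dirty",)
print(table_driven_agent("A-clean"))   # looks up ("A-dirty", "A-clean")
```

The table grows exponentially with the length of the percept sequence, which is why table-driven agents are impractical except as a conceptual baseline.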

10) What are the four different kinds of agent programs?

Simple reflex agents; model-based reflex agents; goal-based agents; and utility-based agents.

11) Explain a simple reflex agent with a diagram.
The simplest kind of agent is the simple reflex agent. These agents select actions on the basis of the current percept, ignoring the rest of the percept history.
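The idea above can be sketched for the two-square vacuum world: the action depends only on the current percept (location, status), never on history. This is an illustrative sketch, not code from the syllabus.

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent: condition-action rules over the current percept only."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    # Clean square: move to the other square.
    return "Right" if location == "A" else "Left"


print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```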

12) Explain with a diagram the model-based reflex agent.

13a) Explain with a diagram the goal-based reflex agent.

Knowing about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, turn right, or go straight on. The correct decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information that describes situations that are desirable, for example, being at the passenger's destination.

13b) What are utility based agents?
Goals alone are not really enough to generate high-quality behavior in most environments. For example, there are many action sequences that will get the taxi to its destination (thereby achieving the goal), but some are quicker, safer, more reliable, or cheaper than others. A utility function maps a state (or a sequence of states) onto a real number, which describes the associated degree of happiness.

13c) What are learning agents?
A learning agent can be divided into four conceptual components, as shown in Fig. 2.15. The most important distinction is between the learning element, which is responsible for making improvements, and the performance element, which is responsible for selecting external actions. The performance element is what we have previously considered to be the entire agent: it takes in percepts and decides on actions. The learning element uses feedback from the critic on how the agent is doing and determines how the performance element should be modified to do better in the future.

13) Define the problem solving agent.
A problem-solving agent is a goal-based agent. It decides what to do by finding sequences of actions that lead to desirable states. The agent can adopt a goal and aim at satisfying it. Goal formulation is the first step in problem solving.

14) Define the terms goal formulation and problem formulation.
Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving. The agent's task is to find out which sequence of actions will get to a goal state.
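The problem that such an agent formulates can be sketched as an object with an initial state, a successor function, a goal test, and a path cost, using a small fragment of the Romania route-finding example (the distances follow the usual textbook map; treat them as illustrative).

```python
# A tiny fragment of the Romania road map: city -> {neighbor: distance_km}.
ROADS = {
    "Arad": {"Sibiu": 140, "Timisoara": 118, "Zerind": 75},
    "Sibiu": {"Fagaras": 99, "Rimnicu Vilcea": 80},
}


class RouteProblem:
    """A well-defined problem: initial state, successors, goal test, path cost."""

    def __init__(self, initial, goal):
        self.initial, self.goal = initial, goal

    def successors(self, state):
        # Returns (action, successor) pairs, e.g. (Go(Sibiu), Sibiu).
        return [(f"Go({city})", city) for city in ROADS.get(state, {})]

    def goal_test(self, state):
        return state == self.goal

    def step_cost(self, state, successor):
        return ROADS[state][successor]  # path cost measured in kilometers


p = RouteProblem("Arad", "Bucharest")
print(p.successors("Arad"))
```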

Problem formulation is the process of deciding what actions and states to consider, given a goal.

15) List the steps involved in a simple problem solving agent.
(i) Goal formulation (ii) Problem formulation (iii) Search (iv) Search algorithm (v) Execution phase

16) Define search and search algorithm.
The process of looking for sequences of actions from the current state to reach the goal state is called search. The search algorithm takes a problem as input and returns a solution in the form of an action sequence. Once a solution is found, the execution phase consists of carrying out the recommended actions.

17) What are the components of well-defined problems?
o The initial state that the agent starts in. The initial state for our agent in the example problem is described by In(Arad).
o A successor function returns the possible actions available to the agent. Given a state x, SUCCESSOR-FN(x) returns a set of {action, successor} ordered pairs, where each action is one of the legal actions in state x, and each successor is a state that can be reached from x by applying the action. For example, from the state In(Arad), the successor function for the Romania problem would return { [Go(Sibiu), In(Sibiu)], [Go(Timisoara), In(Timisoara)], [Go(Zerind), In(Zerind)] }.
o The goal test determines whether the given state is a goal state.
o A path cost function assigns a numeric cost to each path. For the Romania problem, the cost of a path might be its length in kilometers.

18) Differentiate toy problems and real world problems.
A toy problem is intended to illustrate various problem solving methods; it can be easily used by different researchers to compare the performance of algorithms. A real world problem is one whose solutions people actually care about.

19) Give examples of real world problems.
(i) Touring problems (ii) Travelling Salesperson Problem (TSP) (iii) VLSI layout (iv) Robot navigation (v) Automatic assembly sequencing (vi) Internet searching

20) List the criteria to measure the performance of different search strategies.
o Completeness: Is the algorithm guaranteed to find a solution when there is one?
o Optimality: Does the strategy find the optimal solution?
o Time complexity: How long does it take to find a solution?
o Space complexity: How much memory is needed to perform the search?

21) Differentiate Uninformed Search (Blind Search) and Informed Search (Heuristic Search) strategies.
Uninformed or blind search uses no additional information beyond that provided in the problem definition, has no information about the number of steps or the path cost, and is less effective. Informed or heuristic search uses problem-specific knowledge beyond the definition of the problem itself and is more effective.

22) Define Best-first search.

Best-first search is an instance of the general TREE-SEARCH or GRAPH-SEARCH algorithm in which a node is selected for expansion based on an evaluation function f(n). Traditionally, the node with the lowest evaluation function value is selected for expansion.

23) Define Artificial Intelligence as formulated by Haugeland.
The exciting new effort to make computers think: machines with minds, in the full and literal sense.

24) Define Artificial Intelligence in terms of human performance.
The art of creating machines that perform functions that require intelligence when performed by people.

25) Define Artificial Intelligence in terms of rational acting.
A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes (Schalkoff). The branch of computer science that is concerned with the automation of intelligent behavior (Luger & Stubblefield).

26) Define Artificial Intelligence in terms of rational thinking.
The study of mental faculties through the use of computational models (Charniak & McDermott). The study of the computations that make it possible to perceive, reason and act (Winston).

27) What does the Turing test mean?
The Turing test proposed by Alan Turing was designed to provide a satisfactory operational definition of intelligence. Turing defined intelligent behavior as the ability to achieve human-level performance in all cognitive tasks, sufficient to fool an interrogator.

28) What capabilities should a computer possess to behave humanly, i.e., to pass the Turing test?
Natural Language Processing: to enable it to communicate successfully in English.
Knowledge Representation: to store information provided before or during interrogation.
Automated Reasoning: to use the stored information to answer questions and to draw new conclusions.
Machine Learning: to adapt to new circumstances and to detect and extrapolate patterns.

29) Define rational agent.
A rational agent is one that does the right thing. Here the right thing is the one that will cause the agent to be more successful. That leaves us with the problem of deciding how and when to evaluate the agent's success.

30) Define an omniscient agent.
An omniscient agent knows the actual outcome of its actions and can act accordingly; but omniscience is impossible in reality.

31) What are the factors that a rational agent should depend on at any given time?
1. The performance measure that defines the degree of success.
2. Everything that the agent has perceived so far. We will call this complete perceptual history the percept sequence.
3. What the agent knows about the environment.
4. The actions that the agent can perform.

32) Define an ideal rational agent.
For each possible percept sequence, an ideal rational agent should do whatever action is expected to maximize its performance measure, on the basis of the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

33) Define an agent program.
An agent program is a function that implements the agent's mapping from percepts to actions.

34) Define Architecture.

The agent program will run on some sort of computing device, which is called the architecture.

35) List the various types of agent programs.
Simple reflex agent program; agent that keeps track of the world; goal-based agent program; utility-based agent program.

36) State the various properties of environments.
Accessible vs. Inaccessible: If an agent's sensing apparatus gives it access to the complete state of the environment, then we say the environment is accessible to the agent.
Deterministic vs. Non-deterministic: If the next state of the environment is completely determined by the current state and the actions selected by the agent, then the environment is deterministic.
Episodic vs. Non-episodic: Here, the agent's experience is divided into episodes. Each episode consists of the agent perceiving and then acting. The quality of the action depends on the episode itself, because subsequent episodes do not depend on what actions occur in previous episodes.
Discrete vs. Continuous: If there is a limited number of distinct, clearly defined percepts and actions, we say that the environment is discrete.

37) What are the phases involved in designing a problem solving agent?
The three phases are: problem formulation, search for a solution, execution.

38) What are the different types of problems?
Single state problems, multiple state problems, contingency problems, exploration problems.

39) Define problem.
A problem is really a collection of information that the agent will use to decide what to do.

40) List the basic elements that are to be included in a problem definition.
Initial state, operators, successor function, state space, path, goal test, path cost.

41) Mention the criteria for the evaluation of a search strategy.
There are 4 criteria: completeness, time complexity, space complexity, optimality.

42) Define autonomy.
If an agent relies on the prior knowledge of its designer rather than on its own percepts, we say that the agent lacks autonomy.

43) Differentiate blind search & heuristic search.
Blind search has no information about the number of steps or the path cost from the current state to the goal; it can only distinguish a goal state from a non-goal state. Heuristic search is given problem-specific knowledge, so it can often find solutions more efficiently.

44) List the various search strategies.
a. BFS b. Uniform cost search c. DFS d. Depth limited search e. Iterative deepening search f. Bidirectional search

45) List the various informed search strategies.
Best first search; greedy search; A* search; memory bounded search (iterative deepening A* search, simplified memory bounded A* search); iterative improvement search (hill climbing, simulated annealing).

46) Differentiate BFS & DFS.
BFS (breadth-first search) explores the shallowest nodes first; its space complexity is higher; it finds the shallowest solution, which is optimal when all step costs are equal; its queuing function appends new nodes at the end of the queue (FIFO).
DFS (depth-first search) explores the deepest nodes first; its space complexity is lower; it is not guaranteed to find an optimal solution; its queuing function inserts new nodes at the front of the queue (LIFO).
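The BFS/DFS contrast can be shown in one sketch: the code is identical except for the queuing discipline, FIFO for BFS (shallowest first) and LIFO for DFS (deepest first). The four-node graph is a hypothetical example.

```python
from collections import deque

# Hypothetical graph: S reaches G directly via A, or the long way via B.
GRAPH = {"S": ["A", "B"], "A": ["G"], "B": ["A"], "G": []}


def search(start, goal, fifo):
    """Tree search whose behavior depends only on the queuing discipline."""
    frontier, visited = deque([[start]]), set()
    while frontier:
        path = frontier.popleft() if fifo else frontier.pop()  # FIFO=BFS, LIFO=DFS
        node = path[-1]
        if node == goal:
            return path
        if node not in visited:
            visited.add(node)
            frontier.extend(path + [n] for n in GRAPH[node])
    return None


print(search("S", "G", fifo=True))   # BFS finds the shallowest path
print(search("S", "G", fifo=False))  # DFS may return a deeper path first
```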

47) Is uniform cost search optimal?
Uniform cost search is optimal; it chooses the best solution depending on the path cost.

48) Write the time & space complexity associated with depth limited search.
Time complexity = O(b^l), where b is the branching factor and l is the depth limit.
Space complexity = O(bl).

49) Define iterative deepening search.
Iterative deepening is a strategy that sidesteps the issue of choosing the best depth limit by trying all possible depth limits: first depth 0, then depth 1, then depth 2, and so on.

50) What is Manhattan distance h2?
The sum of the horizontal and vertical distances of the tiles from their goal positions in a 15-puzzle problem is called the Manhattan distance or city block distance.

51) Define CSP.
A constraint satisfaction problem is a special kind of problem that satisfies some additional structural properties beyond the basic requirements for problems in general. In a CSP, the states are defined by the values of a set of variables, and the goal test specifies a set of constraints that the values must obey.

52) What are the types of constraints?
(i) Unary constraints relate to one variable. (ii) Binary constraints relate two variables. (iii) Higher order constraints relate more than two variables. (iv) Absolute constraints. (v) Preference constraints.

53) Define MRV.
The Minimum Remaining Values heuristic chooses the variable with the fewest legal values.

54) Define LCV.
The least constraining value heuristic prefers the value that rules out the fewest choices for the neighboring variables in the constraint graph.

55) Define constraint propagation.
Constraint propagation is propagating the implications of a constraint on one variable onto the other variables, as done by forward checking, and then onto the constraints, to detect inconsistency.

56) Give the drawback of DFS.
The drawback of DFS is that it can get stuck going down the wrong path. Many problems have very deep or even infinite search trees, so DFS will never be able to recover from an unlucky choice at one of the nodes near the top of the tree. DFS should therefore be avoided for search trees with large or infinite maximum depths.

57) What is called bidirectional search?
The idea behind bidirectional search is to simultaneously search both forward from the initial state and backward from the goal, and stop when the two searches meet in the middle.

58) Explain depth limited search.
Depth limited search avoids the pitfalls of DFS by imposing a cutoff on the maximum depth of a path. This cutoff can be implemented by a special depth limited search algorithm, or by using the general search algorithm with operators that keep track of the depth.

59) Differentiate greedy search & A* search.
Greedy search: if we minimize the estimated cost to reach the goal, h(n), we get greedy search. The search time is usually decreased compared to uninformed algorithms, but the algorithm is neither optimal nor complete.
A* search: minimizes f(n) = g(n) + h(n), combining the advantages of uniform cost search and greedy search. A* is complete and optimal, but its space complexity is still prohibitive.

60) What is Simulated Annealing?
Simulated Annealing (SA) is motivated by an analogy to annealing in solids. The idea of SA comes from a paper published by Metropolis et al. in 1953. The algorithm in that paper simulated the cooling of material in a heat bath, a process known as annealing. If you heat a solid past its melting point and then cool it, the structural properties of the solid depend on the rate of cooling: if the liquid is cooled slowly enough, large crystals will be formed, but if the liquid is cooled quickly (quenched), the crystals will contain imperfections. Metropolis's algorithm simulated the material as a system of particles. The algorithm simulates the cooling process by gradually lowering the temperature of the system until it converges to a steady, frozen state.
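The cooling process described above can be sketched as follows: worse moves are accepted with probability exp(delta/T), and T is gradually lowered. The one-dimensional objective and the schedule parameters are hypothetical, chosen only for illustration.

```python
import math
import random


def value(x):
    return -(x - 3) ** 2  # hypothetical objective with its maximum at x = 3


def simulated_annealing(x=0.0, temp=10.0, cooling=0.95, steps=500, seed=0):
    """Accept improving moves always; accept worse moves with prob exp(delta/T)."""
    rng = random.Random(seed)
    for _ in range(steps):
        nxt = x + rng.uniform(-1, 1)          # random neighbor
        delta = value(nxt) - value(x)
        if delta > 0 or rng.random() < math.exp(delta / temp):
            x = nxt
        temp *= cooling                        # gradually lower the temperature
    return x


print(round(simulated_annealing(), 2))
```

As the temperature approaches zero, the acceptance probability for worse moves vanishes and the algorithm behaves like hill climbing, which is the "frozen state" described above.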

61) Give the procedure of IDA* search.
A* is complete and optimal, but its space complexity is still prohibitive; IDA* (iterative deepening A*) reduces the memory requirement. In this algorithm, each iteration is a DFS, just as in regular iterative deepening, but the depth-first search is modified to use an f-cost limit (f(n) = g(n) + h(n)) rather than a depth limit. Thus each iteration expands all nodes inside the contour for the current f-cost.

62) What is the advantage of memory bounded search techniques?
We can reduce the space requirements of A* with memory bounded algorithms such as IDA* and SMA*.

63) List some properties of SMA* search.
It will utilize whatever memory is made available to it. It avoids repeated states as far as its memory allows. It is complete if the available memory is sufficient to store the shallowest solution path. It is optimal if enough memory is available to store the shallowest optimal solution path; otherwise it returns the best solution that can be reached with the available memory. When enough memory is available for the entire search tree, the search is optimally efficient.

64) List some drawbacks of the hill climbing process.
Hill climbing and simulated annealing are iterative improvement algorithms: they keep only a single state in memory, but can get stuck on local maxima.
Local maxima: a local maximum, as opposed to a global maximum, is a peak that is lower than the highest peak in the state space. Once a local maximum is reached, the algorithm will halt even though the solution may be far from satisfactory.
Plateaux: a plateau is an area of the state space where the evaluation function is essentially flat. The search will conduct a random walk.

65) List the various AI application areas.
Natural language processing (understanding, generating, translating); planning; vision (scene recognition, object recognition, face recognition); robotics; theorem proving; speech recognition; game playing; problem solving; expert systems; etc.
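The local-maximum drawback of hill climbing can be demonstrated in a few lines: the search halts on a peak even when it is not the global maximum. The integer landscape below is a hypothetical example with a local maximum at state 2 and the global maximum at state 8.

```python
def hill_climb(start, neighbors, value):
    """Repeatedly move to the best neighbor; stop when no neighbor improves."""
    current = start
    while True:
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current  # no better neighbor: a (possibly local) maximum
        current = best


# Hypothetical landscape over integer states 0..10.
LANDSCAPE = [0, 3, 5, 2, 1, 4, 6, 7, 9, 4, 2]


def nbrs(s):
    return [n for n in (s - 1, s + 1) if 0 <= n < len(LANDSCAPE)]


def val(s):
    return LANDSCAPE[s]


print(hill_climb(0, nbrs, val))  # climbs to the local maximum at state 2
print(hill_climb(5, nbrs, val))  # climbs to the global maximum at state 8
```

Starting point determines the outcome, which is exactly why random restarts or simulated annealing are used in practice.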

UNIT - II

1. What are Logical agents?
Logical agents apply inference to a knowledge base to derive new information and make decisions.

2. What is first-order logic?
First-order logic is sufficiently expressive to represent a good deal of our common-sense knowledge. It also either subsumes or forms the foundation of many other representation languages.

3. What is a symbol?
The basic syntactic elements of first-order logic are the symbols. They stand for objects, relations and functions.

4. What are the types of quantifiers?
The types of quantifiers are: universal quantifiers; existential quantifiers.

5. What are the three kinds of symbols?
The three kinds of symbols are: constant symbols, standing for objects;

predicate symbols, standing for relations; function symbols, standing for functions.

6. What is Logic?
Logic consists of: a formal system for describing states of affairs, consisting of a) syntax and b) semantics; and proof theory, a set of rules for deducing the entailments of a set of sentences.

7. Define a sentence.
Each individual representation of facts is called a sentence. The sentences are expressed in a language called a knowledge representation language.

8. Define a proof.
A sequence of applications of inference rules is called a proof. Finding a proof is exactly like finding a solution to a search problem. If the successor function is defined to generate all possible applications of inference rules, then the search algorithms can be applied to find proofs.

9. Define interpretation.
An interpretation specifies exactly which objects, relations and functions are referred to by the constant, predicate, and function symbols.

10. What are the three levels in describing a knowledge based agent?
The three levels in describing a knowledge based agent are: the logical level; the implementation level; the knowledge level (or epistemological level).

11. Define syntax.
Syntax is the arrangement of words. The syntax of a knowledge representation language describes the possible configurations that can constitute sentences, i.e., how to make sentences.

12. Define semantics.
The semantics of the language defines the truth of each sentence with respect to each possible world. With this semantics, when a particular configuration exists within an agent, the agent believes the corresponding sentence.

13. Define Modus Ponens rule in propositional logic.
The standard pattern of inference that can be applied to derive chains of conclusions that lead to the desired goal is said to be the Modus Ponens rule.

14. Define a knowledge base.
A knowledge base is the central component of a knowledge based agent; it is a set of representations of facts about the world.

15. Define an inference procedure.
An inference procedure reports whether or not a sentence α is entailed by a knowledge base, given the knowledge base and the sentence. An inference procedure i can be described by the sentences that it can derive. If i can derive α from the knowledge base, we write KB ⊢i α: α is derived from KB by i, or i derives α from KB.

16. What are the basic components of propositional logic?
The basic components of propositional logic are: logical constants (True, False); propositional symbols (P, Q); logical connectives (∧, ∨, ¬, ⇒, ⇔).

17. Define AND-Elimination rule in propositional logic.
The AND-Elimination rule states that from a given conjunction it is possible to infer any of the conjuncts.

α1 ∧ α2 ∧ ... ∧ αn ⊢ αi
18. Define AND-Introduction rule in propositional logic.
The AND-Introduction rule states that from a list of sentences we can infer their conjunction.
α1, α2, ..., αn ⊢ α1 ∧ α2 ∧ ... ∧ αn

19. What is forward chaining?

A deduction to reach a conclusion from a set of antecedents is called forward chaining. In other words, the system starts from a set of facts and a set of rules, and tries to find a way of using these rules and facts to deduce a conclusion or come up with a suitable course of action.

20. What is backward chaining?
In backward chaining, we start from a conclusion, which is the hypothesis we wish to prove, and we aim to show how that conclusion can be reached from the rules and facts in the database.

21. What is first order logic?
First-order logic is sufficiently expressive to represent a good deal of our commonsense knowledge. It also either subsumes or forms the foundation of many other representation languages and has been studied intensively for many decades.

22. Define compositionality.
In a compositional language, the meaning of a sentence is a function of the meaning of its parts. For example, the meaning of S1,4 ∧ S1,2 is related to the meanings of S1,4 and S1,2.

23. Define objects.
Nouns and noun phrases refer to objects, for example, people, houses, etc.

24. Define relations.
Verbs and verb phrases refer to relations, for example, properties such as red, round, prime.

25. Define functions.
A relation in which there is only one value for a given input is called a function.

26. Define total function.
A total function must have a value for every input tuple.

27. Define symbols in FOL.
The basic syntactic elements of first-order logic are the symbols that stand for objects, relations, and functions.

28. What are the different kinds of symbols?
i. Constant symbols: for objects. ii. Predicate symbols: for relations. iii. Function symbols: for functions.

29. Define semantics in FOL.
The semantics must relate sentences to models in order to determine truth.

30. Define interpretation.
An interpretation specifies exactly which objects, relations and functions are referred to by the constant, predicate, and function symbols.

31. What is meant by a term?
A term is a logical expression that refers to an object. Ex: constant symbols.

32. Define tuples.
A tuple is a collection of objects arranged in a fixed order and is written with angle brackets surrounding the objects.

33. What is an atomic sentence?
An atomic sentence is true in a given model, under a given interpretation, if the relation referred to by the predicate symbol holds among the objects referred to by the arguments.

34. Define quantifiers.

Quantifiers express properties of entire collections of objects, instead of enumerating the objects by name. First-order logic contains two standard quantifiers, called universal and existential.

35. Define the following.
Variable: a symbol that can refer to different objects. Ground term: a term with no variables is called a ground term.

36. Define diagnostic rules.
Diagnostic rules lead from observed effects to hidden causes.

37. Define causal rules.
Causal rules reflect the assumed direction of causality in the world.

38. Define knowledge engineering.
Knowledge engineering is the general process of constructing a knowledge base. A knowledge engineer is someone who investigates a particular domain, learns what concepts are important in that domain, and creates a formal representation of the objects and relations in the domain.

39. What are the steps in the knowledge engineering process?
i. Identify the task. ii. Assemble the relevant knowledge. iii. Decide on a vocabulary of predicates, functions and constants. iv. Encode general knowledge about the domain. v. Encode a description of the specific problem instance. vi. Pose queries to the inference procedure and get answers. vii. Debug the knowledge base.

40. What is the rule of universal instantiation?
The rule of universal instantiation says that we can infer any sentence obtained by substituting a ground term for the variable. To write out the inference rule formally, we use the notation of substitutions. Let SUBST(θ, α) denote the result of applying the substitution θ to the sentence α. Then the rule is written as
∀v α ⊢ SUBST({v/g}, α)
for any variable v and ground term g.

41. Define existential instantiation.
For any sentence α, variable v, and constant symbol k that does not appear elsewhere in the knowledge base,
∃v α ⊢ SUBST({v/k}, α).

42. Define skolem constant.
Suppose we discover that there is a number that is a little bigger than 2.71828 and satisfies the equation d(x^y)/dy = x^y for x.
We can give this new number a name, such as e, but it would be a mistake to give it the name of an existing object, such as π. In logic, the new name is called a skolem constant.

43. Define unification.
Unification is a key component of all first-order inference algorithms. The UNIFY algorithm takes two sentences and returns a unifier for them if one exists: UNIFY(p, q) = θ, where SUBST(θ, p) = SUBST(θ, q). Ex: UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}.

44. Define standardizing apart.
If two different sentences happen to use the same variable name, then we standardize apart one of the two sentences being unified, which means renaming its variables to avoid name clashes.
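The UNIFY example above can be sketched in a few lines. This is a simplified illustration, not the full textbook algorithm: variables are lowercase strings, compound terms are tuples, and the occur check is omitted.

```python
def is_var(t):
    """Treat lowercase strings (x, y, ...) as variables."""
    return isinstance(t, str) and t[:1].islower()


def unify(p, q, theta=None):
    """Return a substitution making p and q identical, or None on failure."""
    if theta is None:
        theta = {}
    if p == q:
        return theta
    if is_var(p):
        return unify_var(p, q, theta)
    if is_var(q):
        return unify_var(q, p, theta)
    if isinstance(p, tuple) and isinstance(q, tuple) and len(p) == len(q):
        for a, b in zip(p, q):
            theta = unify(a, b, theta)
            if theta is None:
                return None
        return theta
    return None


def unify_var(var, x, theta):
    if var in theta:
        return unify(theta[var], x, theta)
    theta[var] = x  # (occur check omitted for brevity)
    return theta


# UNIFY(Knows(John, x), Knows(John, Jane)) = {x/Jane}
print(unify(("Knows", "John", "x"), ("Knows", "John", "Jane")))
```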

45. What do you mean by the forward chaining algorithm?
Starting from the known facts, it triggers all the rules whose premises are satisfied, adding their conclusions to the known facts. The process repeats until the query is answered or no new facts are added.

46. Define the backward chaining algorithm.
It is called with a list of goals containing a single element, the original query, and returns the set of all substitutions satisfying the query. The list of goals can be thought of as a stack waiting to be worked on; if all of them are satisfied, then the current branch of the proof succeeds.

47. Define logic programming.
Logic programming is a technology that comes fairly close to embodying the declarative ideal: that systems should be constructed by expressing knowledge in a formal language and that problems should be solved by running inference processes on that knowledge.

48. Define Prolog.
In Prolog terms, there must be a finite number of solutions for any goal with unbound variables.

49. What is constraint logic programming?
Constraint logic programming allows variables to be constrained rather than bound. A solution to a constraint logic program is the most specific set of constraints on the query variables that can be derived from the knowledge base.

50. What is conjunctive normal form?
CNF is defined as a conjunction of clauses, where each clause is a disjunction of literals. Literals can contain variables, which are assumed to be universally quantified.

51. What is skolemization?
Skolemization is the process of removing existential quantifiers by elimination.

52. Define identical and unifiable factoring.
Propositional factoring reduces two literals to one if they are identical. First-order factoring reduces two literals to one if they are unifiable.

53. Define refutation complete.
Refutation-complete means that if a set of sentences is unsatisfiable, then resolution will always be able to derive a contradiction.
54. What are the steps to be followed while preparing a problem for OTTER?
A set of clauses known as the set of support, which defines the important facts about the problem; a set of usable axioms that are outside the set of support; a set of equations known as rewrites or demodulators; and a set of parameters and clauses that define the control strategy.

55. Define syntax.
Syntax is the arrangement of words. The syntax of a knowledge representation language describes the possible configurations that can constitute sentences, i.e., how to make sentences.

56. Define semantics.

The semantics of the language defines the truth of each sentence with respect to each possible world. With this semantics, when a particular configuration exists within an agent, the agent believes the corresponding sentence.

57. Define Logic.
Logic consists of: i. a formal system for describing states of affairs, consisting of a) syntax and b) semantics; ii. proof theory, a set of rules for deducing the entailments of a set of sentences.

58. What is entailment?
The relation between sentences is called entailment. The formal definition of entailment is this: KB ⊨ α if and only if α is true in every model in which KB is true.

59. What is truth preserving?
An inference algorithm that derives only entailed sentences is called sound or truth preserving.

60. Define a proof.
A sequence of applications of inference rules is called a proof. Finding a proof is exactly like finding a solution to a search problem. If the successor function is defined to generate all possible applications of inference rules, then the search algorithms can be applied to find proofs.

61. Define a complete inference procedure.
An inference procedure is complete if it can derive all true conclusions from a set of premises.

62. Define interpretation.
An interpretation specifies exactly which objects, relations and functions are referred to by the constant, predicate, and function symbols.

63. Define validity of a sentence.
A sentence is valid or necessarily true if and only if it is true under all possible interpretations in all possible worlds.

64. Define satisfiability of a sentence.
A sentence is satisfiable if and only if there is some interpretation in some world for which it is true.

65. Define true sentence.
A sentence is true under a particular interpretation if the state of affairs it represents is the case.
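The definition of entailment (KB ⊨ α iff α is true in every model in which KB is true) can be checked directly by enumerating truth assignments. A minimal sketch, with a hypothetical two-symbol knowledge base encoding modus ponens:

```python
from itertools import product


def tt_entails(kb, alpha, symbols):
    """kb and alpha are functions from a model (dict symbol -> bool) to bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # a model of KB in which alpha is false: no entailment
    return True


# Hypothetical KB: P and (P => Q); query: Q.
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
alpha = lambda m: m["Q"]
print(tt_entails(kb, alpha, ["P", "Q"]))  # -> True
```

This is the truth-table enumeration view of entailment; resolution and chaining are more efficient proof procedures over the same relation.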

66.What are the basic Components of propositonal logic? i. Logical Constants (True, False) 67. Define Modus Ponen's rule in Propositional logic? The standard patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal is said to be Modus Ponen's rule. 68. Define AND -Elimination rule in propositional logic AND elimination rule states that from a given conjunction it is possible to inference any of the conjuncts.

From a1 ^ a2 ^ a3 ^ ... ^ an, infer any ai (i = 1, 2, 3, ..., n).

70. Define OR-Introduction rule in propositional logic.
The OR-Introduction rule states that from a sentence, we can infer its disjunction with anything:
From ai, infer a1 v a2 v a3 v ... v an.
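The inference rules above (questions 67 and 68) operate purely on the form of sentences. A minimal sketch, with sentences encoded as tuples (an illustrative encoding, not a standard library):

```python
def modus_ponens(implication, fact):
    """From (a, '=>', b) and a, infer b (question 67)."""
    antecedent, arrow, consequent = implication
    if arrow == "=>" and fact == antecedent:
        return consequent
    return None  # rule does not apply

def and_elimination(conjunction):
    """From ('and', a1, ..., an), infer each conjunct ai (question 68)."""
    assert conjunction[0] == "and"
    return list(conjunction[1:])

print(modus_ponens(("rain", "=>", "wet"), "rain"))  # wet
print(and_elimination(("and", "P", "Q", "R")))      # ['P', 'Q', 'R']
```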

UNIT III PLANNING

1. Define state-space search.
The most straightforward approach is to use state-space search. Because the descriptions of actions in a planning problem specify both preconditions and effects, it is possible to search in either direction: either forward from the initial state or backward from the goal.

2. What are the types of state-space search?
The types of state-space search are:
Forward state-space search;
Backward state-space search.

3. What is Partial-Order Planning?
Partial-order planning is an approach in which the planner can place two actions into a plan without committing to which action comes first.

A partial-order plan includes a set of actions that make up the steps of the plan. These are taken from the set of actions in the planning problem. The empty plan contains just the Start and Finish actions. Start has no preconditions and has as its effect all the literals in the initial state of the planning problem. Finish has no effects and has as its preconditions the goal literals of the planning problem.

4. What are the advantages and disadvantages of Partial-Order Planning?
Advantage: Partial-order planning has a clear advantage in being able to decompose problems into subproblems.
Disadvantage: It does not represent states directly, so it is harder to estimate how far a partial-order plan is from achieving a goal.

5. What is a Planning graph?
A planning graph consists of a sequence of levels that correspond to time steps in the plan, where level 0 is the initial state. Each level contains a set of literals and a set of actions.

6. What is Conditional planning?
Conditional planning, also known as contingency planning, deals with incomplete information by constructing a conditional plan that accounts for each possible situation or contingency that could arise.

7. What is action monitoring?
The process of checking the preconditions of each action as it is executed, rather than checking the preconditions of the entire remaining plan, is called action monitoring.

8. Define planning.
Planning can be viewed as a type of problem solving in which the agent uses beliefs about actions and their consequences to search for a solution.

9. List the features of an ideal planner.
The features of an ideal planner are:
The planner should be able to represent the states, goals and actions;
The planner should be able to add new actions at any time;
The planner should be able to use a Divide and Conquer method for solving very big problems.

10. What are the components that are needed for representing an action?
The components that are needed for representing an action are:
Action description;
Precondition;
Effect.

11. What are the components that are needed for representing a plan?
The components that are needed for representing a plan are:
A set of plan steps;
A set of ordering constraints;
A set of variable binding constraints;
A set of causal link protections.

12. What are the different types of planning?
The different types of planning are:
Situation space planning;
Progressive planning;

Regressive planning;
Partial order planning;
Fully instantiated planning.

13. Define a solution.
A solution is defined as a plan that an agent can execute and that guarantees the achievement of the goal.

14. Define complete plan and consistent plan.
A complete plan is one in which every precondition of every step is achieved by some other step. A consistent plan is one in which there are no contradictions in the ordering or binding constraints.

15. What are Forward state-space search and Backward state-space search?
Forward state-space search: It searches forward from the initial situation to the goal situation.
Backward state-space search: It searches backward from the goal situation to the initial situation.

16. What is Induction heuristics? What are the different types of induction heuristics?
Induction heuristics is a method which enables procedures to learn descriptions from positive and negative examples. There are two different types of induction heuristics:
Require-link heuristics;
Forbid-link heuristics.

17. Define Reification.
The process of treating something abstract and difficult to talk about as though it were concrete and easy to talk about is called reification.

18. What is a reified link?
The elevation of a link to the status of a describable node is a kind of reification. When a link is so elevated, it is said to be a reified link.

19. Define action monitoring.
The process of checking the preconditions of each action as it is executed, rather than checking the preconditions of the entire remaining plan, is called action monitoring.

20. What is meant by Execution monitoring?
Execution monitoring is related to conditional planning in the following way. An agent that builds a plan and then executes it while watching for errors is, in a sense, taking into account the possible conditions that constitute execution errors.
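The action representation of question 10 (description, precondition, effect) and the forward state-space search of question 15 can be combined into a minimal sketch. The domain and action names below are made up for illustration; effects are split into add and delete lists in the STRIPS style:

```python
from collections import deque

def applicable(state, a):
    # An action is applicable when its preconditions hold in the state.
    return a["pre"] <= state

def apply_action(state, a):
    # Effect = remove the delete list, add the add list.
    return (state - a["del"]) | a["add"]

def forward_search(init, goal, actions):
    """Breadth-first forward state-space search from the initial state."""
    frontier, seen = deque([(frozenset(init), [])]), {frozenset(init)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for a in actions:
            if applicable(state, a):
                nxt = frozenset(apply_action(state, a))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [a["name"]]))
    return None  # no plan achieves the goal

# Tiny illustrative domain (names are invented for this example).
acts = [
    {"name": "open_door", "pre": {"at_door"}, "add": {"door_open"}, "del": set()},
    {"name": "go_in", "pre": {"at_door", "door_open"}, "add": {"inside"}, "del": {"at_door"}},
]
print(forward_search({"at_door"}, {"inside"}, acts))  # ['open_door', 'go_in']
```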

UNIT IV UNCERTAINTY

1. Define Uncertainty.
Uncertainty means that many of the simplifications that are possible with deductive inference are no longer valid.

2. State the reasons why first-order logic fails to cope with a domain like medical diagnosis.
Three reasons:
Laziness: It is too much work to list the complete set of antecedents or consequents needed to ensure an exceptionless rule.
Theoretical ignorance: Medical science has no complete theory for the domain.
Practical ignorance: Even if we know all the rules, we may be uncertain about a particular item needed.

3. What is the need for probability theory in uncertainty?
Probability provides a way of summarizing the uncertainty that comes from our laziness and ignorance. Probability statements are made with respect to the evidence the agent has, rather than describing the world directly.

4. What is the need for utility theory in uncertainty?
Utility theory says that every state has a degree of usefulness, or utility, to an agent, and that the agent will prefer states with higher utility. Utility theory is used to represent and reason with preferences.

5. What is called decision theory?
Preferences, as expressed by utilities, are combined with probabilities in the general theory of rational decisions called decision theory.
Decision Theory = Probability Theory + Utility Theory.

6. Define conditional probability.
Once the agent has obtained some evidence concerning the previously unknown propositions making up the domain, conditional or posterior probabilities, written P(A|B), are used. It is important that P(A|B) can only be used when all that is known is B.

7. When is a probability distribution used?
If we want the probabilities of all the possible values of a random variable, a probability distribution is used.
Eg: P(Weather) = (0.7, 0.2, 0.08, 0.02). This type of notation simplifies many equations.

8. What is an atomic event?
An atomic event is an assignment of particular values to all variables; in other words, the complete specification of the state of the domain.

9. Define joint probability distribution.
The joint probability distribution completely specifies an agent's probability

assignments to all propositions in the domain. The joint probability distribution P(x1, x2, ..., xn) assigns probabilities to all possible atomic events, where x1, x2, ..., xn are the variables.

10. What is meant by belief network?
A belief network is a graph in which the following holds:
A set of random variables makes up the nodes;
A set of directed links or arrows connects pairs of nodes;
Each node has a conditional probability table;
The graph has no directed cycles.

11. What are called Polytrees?
Singly connected networks, in which there is at most one undirected path between any two nodes, are known as polytrees. Some inference algorithms work only on polytrees.

12. What is a multiply connected graph?
A multiply connected graph is one in which two nodes are connected by more than one path.

13. List the three basic classes of algorithms for evaluating multiply connected graphs.
The three basic classes of algorithms for evaluating multiply connected graphs are:
Clustering methods;
Conditioning methods;
Stochastic simulation methods.

14. What is called the principle of Maximum Expected Utility (MEU)?
The basic idea is that an agent is rational if and only if it chooses the action that yields the highest expected utility, averaged over all the possible outcomes of the action. This is known as MEU.

15. What is meant by deterministic nodes?
A deterministic node has its value specified exactly by the values of its parents, with no uncertainty.

16. What are the uses of a belief network?
The uses of a belief network are:
Making decisions based on probabilities in the network and on the agent's utilities;
Deciding which additional evidence variables should be observed in order to gain useful information;
Performing sensitivity analysis to understand which aspects of the model have the greatest impact on the probabilities of the query

variables (and therefore must be accurate);
Explaining the results of probabilistic inference to the user.

17. Give the Bayes' rule equation.
We know that
P(A ^ B) = P(A|B) P(B)   ... (1)
P(A ^ B) = P(B|A) P(A)   ... (2)
Equating (1) and (2) and dividing by P(A), we get
P(B|A) = P(A|B) P(B) / P(A)

18. What is called a Markov Decision Problem?

The problem of calculating an optimal policy in an accessible, stochastic environment with a known transition model is called a Markov Decision Problem (MDP).
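Bayes' rule from question 17 above can be checked numerically. A minimal sketch with made-up numbers (an invented diagnosis scenario, not data from the source):

```python
# Bayes' rule (question 17): P(B|A) = P(A|B) * P(B) / P(A).
# Illustrative numbers: B = disease, A = positive test (all values invented).
p_b = 0.01            # prior P(B)
p_a_given_b = 0.9     # P(A|B)
p_a_given_not_b = 0.05

# Total probability: P(A) = P(A|B)P(B) + P(A|~B)P(~B)
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

p_b_given_a = p_a_given_b * p_b / p_a
print(round(p_b_given_a, 3))  # 0.154
```

Note how the posterior (about 15%) stays far below the test's 90% sensitivity because the prior P(B) is small, which is exactly why the prior term in Bayes' rule matters.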

19. Define Dynamic Belief Network.
A belief network with one node for each state and sensor variable for each time step is called a Dynamic Belief Network (DBN).

20. Define Dynamic Decision Network.
A Dynamic Decision Network (DDN) is obtained by adding utility nodes and decision nodes for actions to a DBN. The DDN calculates the expected utility of each decision sequence.

UNIT V LEARNING
1. What is meant by learning?
Learning is a goal-directed process of a system that improves the knowledge or the knowledge representation of the system by exploring experience and prior knowledge.

2. Define informational equivalence and computational equivalence.
Two representations are informationally equivalent if a transformation from one representation to the other causes no loss of information; they can be constructed from each other. They are computationally equivalent if the same information and the same inferences are achieved with the same amount of effort.

3. Define knowledge acquisition and skill refinement.
Knowledge acquisition (example: learning physics) is learning new symbolic information coupled with the ability to apply that information in an effective manner. Skill refinement (example: riding a bicycle, playing the piano) occurs at a subconscious level by virtue of repeated practice.

4. What is Explanation-Based Learning?
In Explanation-Based Learning, the background knowledge is sufficient to explain the hypothesis. The agent does not learn anything factually new from the instance. It extracts general rules from single examples by explaining the examples and generalizing the explanation.

5. Define Knowledge-Based Inductive Learning.
Knowledge-Based Inductive Learning finds inductive hypotheses that explain a set of observations with the help of background knowledge.

6. What is truth preserving?
An inference algorithm that derives only entailed sentences is called sound or truth preserving.

7. Define Inductive learning. How can the performance of inductive learning algorithms be measured?
Learning a function from examples of its inputs and outputs is called inductive learning. Performance is measured by the learning curve, which shows the prediction accuracy as a function of the number of observed examples.

8. List the advantages of Decision Trees.
The advantages of Decision Trees are:
It is one of the simplest and most successful forms of learning algorithm;
It serves as a good introduction to the area of inductive learning and is easy to implement.

9. What is the function of Decision Trees?
A decision tree takes as input an object or situation described by a set of properties, and outputs a yes/no decision. Decision trees represent Boolean functions.

10. List some of the practical uses of decision tree learning.
Some of the practical uses of decision tree learning are:
Designing oil platform equipment;
Learning to fly.

11. What is the task of reinforcement learning?
The task of reinforcement learning is to use rewards to learn a successful agent function.

12. Define Passive learner and Active learner.

A passive learner watches the world going by and tries to learn the utility of being in various states. An active learner acts using the learned information, and can use its problem generator to suggest explorations of unknown portions of the environment.

13. State the factors that play a role in the design of a learning system.
The factors that play a role in the design of a learning system are:
Learning element;
Performance element;
Critic;
Problem generator.

14. What is memoization?
Memoization is used to speed up programs by saving the results of computation. The basic idea is to accumulate a database of input/output pairs; when the function is called, it first checks the database to see if it can avoid solving the problem from scratch.

15. Define Q-Learning.
In Q-learning, the agent learns an action-value function giving the expected utility of taking a given action in a given state.

16. Define supervised learning and unsupervised learning.
Any situation in which both the inputs and outputs of a component can be perceived is called supervised learning. Learning when there is no hint at all about the correct outputs is called unsupervised learning.

17. Define Bayesian learning.
Bayesian learning simply calculates the probability of each hypothesis given the data, and makes predictions on that basis. That is, the predictions are made by using all the hypotheses, weighted by their probabilities, rather than by using just a single best hypothesis.

18. What is a utility-based agent?
A utility-based agent learns a utility function on states and uses it to select actions that maximize the expected outcome utility.

19. What is reinforcement learning?
Reinforcement learning refers to a class of problems in machine learning which postulate an agent exploring an environment in which the agent perceives its current state and takes actions.
The environment, in return, provides a reward (which can be positive or negative). Reinforcement learning algorithms attempt to find a policy for maximizing cumulative reward for the agent over the course of the problem.

20. What is the important task of reinforcement learning?
The important task of reinforcement learning is to use rewards to learn a successful agent function.
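The action-value function from question 15 is learned with the Q-learning update Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). A minimal sketch on a made-up two-state domain (states, actions and the transition are invented for illustration):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

actions = ["left", "right"]
Q = {(s, a): 0.0 for s in ["s0", "s1"] for a in actions}

# Suppose moving right from s0 to s1 yields reward 1 (a toy transition).
q_update(Q, "s0", "right", 1.0, "s1", actions)
print(Q[("s0", "right")])  # 0.5
```

With learning rate alpha = 0.5 and all values initialized to zero, the single rewarded step moves Q(s0, right) halfway toward the target value of 1.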

SIXTEEN MARKS UNIT 1: PROBLEM SOLVING


1. Explain Agents in detail.
An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.
Percept: The term percept refers to the agent's perceptual inputs at any given instant.
Percept sequence: An agent's percept sequence is the complete history of everything the agent has ever perceived.
Agent function: Mathematically speaking, an agent's behavior is described by the agent function.
Properties of task environments:
Fully observable vs partially observable;
Deterministic vs stochastic;
Episodic vs sequential;
Static vs dynamic;
Discrete vs continuous;
Single agent vs multi agent.

2. Explain uninformed search strategies.
Uninformed search strategies have no additional information about states beyond that provided in the problem definition. Strategies that know whether one non-goal state is more promising than another are called informed search or heuristic search strategies. There are five uninformed search strategies, as given below:
Breadth-first search;
Uniform-cost search;
Depth-first search;
Depth-limited search;
Iterative deepening search.

3. Explain informed search strategies.
An informed search strategy is one that uses problem-specific knowledge beyond the definition of the problem itself. It can find solutions more efficiently than an uninformed strategy.
Best-first search;
Heuristic functions;
Greedy Best-First Search (GBFS);
A* search;
Memory-bounded heuristic search.

4. Explain CSP in detail.
A constraint satisfaction problem is a special kind of problem that satisfies some additional structural properties beyond the basic requirements for problems in general. In a CSP, the states are defined by the values of a set of variables and the goal test specifies a set of constraints that the values must obey. A CSP can be viewed as a standard search problem as follows:
Initial state: the empty assignment {}, in which all variables are unassigned.
Successor function: a value can be assigned to any unassigned variable, provided that it does not conflict with previously assigned variables.
Goal test: the current assignment is complete.
Path cost: a constant cost for every step.
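The CSP-as-search formulation above maps directly onto backtracking search. A minimal sketch on a three-region map-coloring problem (the regions and constraint encoding are chosen for illustration):

```python
def backtrack(assignment, variables, domains, constraints):
    """Backtracking search over the CSP formulation above: start from the
    empty assignment, extend one variable at a time, succeed when complete."""
    if len(assignment) == len(variables):
        return assignment  # goal test: the assignment is complete
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        candidate = {**assignment, var: value}
        # Successor function: only extend with non-conflicting values.
        if all(ok(candidate) for ok in constraints):
            result = backtrack(candidate, variables, domains, constraints)
            if result is not None:
                return result
    return None

# Tiny map-coloring CSP: three mutually adjacent regions must all differ.
variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}

def differ(x, y):
    # Binary constraint: x and y get different values (if both assigned).
    return lambda a: x not in a or y not in a or a[x] != a[y]

constraints = [differ("WA", "NT"), differ("WA", "SA"), differ("NT", "SA")]
print(backtrack({}, variables, domains, constraints))
```

The three `differ` constraints are all binary, matching the "varieties of constraints" classification that follows.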

Varieties of CSPs:
Discrete variables;
CSPs with continuous domains.
Varieties of constraints:
Unary constraints involve a single variable;
Binary constraints involve pairs of variables;
Higher-order constraints involve 3 or more variables.

UNIT 2: LOGICAL REASONING

1. Explain Reasoning patterns in propositional logic with example.
Modus ponens;
AND elimination;
OR elimination;
AND introduction;
Resolution;
Unit resolution;
Double negation.

2. Explain in detail about the knowledge engineering process in FOL.
Identify the task;
Assemble the relevant knowledge;
Decide on a vocabulary of predicates, constants and functions;
Encode general knowledge about the domain;
Encode a description of the specific problem;
Pose queries;
Debug the knowledge base.

3. Discuss in detail about unification and lifting.
Unification;
Generalized modus ponens;
Lifting.

4. Explain in detail about forward and backward chaining with example.
Example;
Efficient forward chaining;
Incremental forward chaining;
Backward chaining;
Logic programming.

5. What is resolution? Explain it in detail.
Definition;
Conjunctive Normal Form;
Resolution inference rule.

UNIT 3: PLANNING
1. Explain partial order planning.
Partial-Order Planning;
A partial-order planning example;
Heuristics for partial-order planning.

2. Discuss about planning graphs in detail.
Planning graphs for heuristic estimation;
The GRAPHPLAN algorithm;

Termination of GRAPHPLAN.

3. Explain planning with State-Space Search in detail.
Forward state-space search;
Backward state-space search;
Heuristics for state-space search.

4. Describe Hierarchical Task Network Planning in detail.
Representing action decompositions;
Modifying the planner for decompositions.

5. Explain conditional planning in detail.
Conditional planning in fully observable environments;
Conditional planning in partially observable environments.

UNIT 4: UNCERTAIN KNOWLEDGE AND REASONING

1. Explain Bayesian Networks in detail.
Semantics of Bayesian networks: representing the full joint distribution, conditional independence relations in Bayesian networks.
Exact inference in Bayesian networks: inference by enumeration, the variable elimination algorithm, the complexity of exact inference, clustering algorithms.
Approximate inference in Bayesian networks: direct sampling methods, inference by Markov chain simulation.

2. Discuss about the Hidden Markov Model in detail.
The Hidden Markov Model or HMM is a temporal probabilistic model in which the state of the process is described by a single discrete random variable. The possible values of the variable are the possible states. The HMM is used in speech recognition.
Simplified matrix algorithms.

3. Explain inference in Temporal Models.
Filtering and prediction;
Smoothing;
Finding the most likely sequence.

4. Discuss in detail about uncertainty.
Acting under uncertainty;
Basic probability notation;
The axioms of probability;
Inference using the Full Joint Distribution;
Independence;
Bayes' Rule and its use;
The Wumpus World revisited.

5. Explain Basic Probability Notation and Axioms of Probability in detail.
Basic probability notation:

Propositions, atomic events, prior probability, conditional probability.
Axioms of probability:
Using the axioms of probability; why the axioms of probability are reasonable.

UNIT 5: LEARNING

1. Explain about Learning Decision Trees in detail.
Decision trees as performance elements;
Expressiveness of decision trees;
Inducing decision trees from examples;
Choosing attribute tests;
Assessing the performance of the learning algorithm;
Noise and overfitting;
Broadening the applicability of decision trees.

2. Explain Explanation-Based Learning in detail.
In Explanation-Based Learning, the background knowledge is sufficient to explain the hypothesis. The agent does not learn anything factually new from the instance. It extracts general rules from single examples by explaining the examples and generalizing the explanation.
Extracting general rules from examples;
Improving efficiency.

3. Explain Learning with complete data in detail.
Discrete models;
Naive Bayes models;
Continuous models;
Bayesian parameter learning;
Learning Bayes net structures.

4. Explain neural networks in detail.
Units in neural networks;
Network structures;
Single-layer feed-forward neural networks;
Multilayer feed-forward neural networks;
Learning neural network structures.

5. Explain Reinforcement Learning in detail.
Reinforcement learning refers to a class of problems in machine learning which postulate an agent exploring an environment in which the agent perceives its current state and takes actions. The environment, in return, provides a reward (which can be positive or negative). Reinforcement learning algorithms attempt to find a policy for maximizing cumulative reward for the agent over the course of the problem.
Passive reinforcement learning;
Active reinforcement learning;
Generalization in reinforcement learning;
Policy search.
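A unit in the feed-forward networks of question 4 computes an activation function of the weighted sum of its inputs, out = g(sum_i w_i * x_i). A minimal sketch using the sigmoid activation, with made-up weights and inputs:

```python
import math

def unit(weights, inputs):
    """A single neural-network unit: sigmoid of the weighted input sum."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

# One step of a single-layer feed-forward network (weights invented).
out = unit([0.5, -0.25], [1.0, 2.0])
print(round(out, 3))  # 0.5, since the weighted sum is 0.5 - 0.5 = 0
```

A multilayer network is just this computation composed: the outputs of one layer of units become the inputs of the next.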
