
An Overview of Abduction as a General Framework for Knowledge-Based Systems

Tim Menzies†
June 12, 1995


Abstract

A single inference procedure (abduction) can be used for (amongst other things) prediction, classification, explanation, planning, monitoring, diagnosis, qualitative reasoning, validation, verification, diagrammatic reasoning, multiple-expert knowledge acquisition and decision support systems.

1 Introduction

Informally, abduction is inference to the best explanation [30]. Given α, β, and the rule R1: α ⊢ β, deduction is using the rule and its preconditions to make a conclusion (α ∧ R1 ⇒ β); induction is learning R1 after seeing numerous examples of α and β; and abduction is using the postcondition and the rule to assume that the precondition could be true (β ∧ R1 ⇒ α) [21]. More formally, abduction is the search for assumptions A which, when combined with some theory T, achieve some goal G without causing a contradiction [8]. That is:

EQ1: T ∪ A ⊢ G
EQ2: T ∪ A ⊬ ⊥
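As a concrete illustration, EQ1 and EQ2 can be read as a generate-and-test search over candidate assumption sets. The following sketch (in Python; the rules, invariants and symbol names are invented toy data, not part of any system described in this paper) checks both conditions by brute force:

```python
from itertools import chain, combinations

# Toy propositional setting: each rule maps a frozenset of preconditions to a
# conclusion. All symbols here are illustrative.
RULES = {frozenset({"rain"}): "wet_grass",
         frozenset({"sprinkler"}): "wet_grass"}
INVARIANTS = [("rain", "no_rain")]          # I: these pairs may not co-occur

def closure(facts):
    """Forward-chain to a fixed point (deduction: alpha ^ R1 => beta)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, post in RULES.items():
            if pre <= known and post not in known:
                known.add(post)
                changed = True
    return known

def abduce(theory_facts, goal, candidates):
    """Return assumption sets A with T u A |- G (EQ1) and T u A consistent (EQ2)."""
    subsets = chain.from_iterable(combinations(candidates, n)
                                  for n in range(len(candidates) + 1))
    good = []
    for A in subsets:
        known = closure(set(theory_facts) | set(A))
        consistent = not any(p in known and q in known for p, q in INVARIANTS)
        if goal in known and consistent:
            good.append(set(A))
    return good

print(abduce({"no_rain"}, "wet_grass", ["rain", "sprinkler"]))  # -> [{'sprinkler'}]
```

Brute-force enumeration of assumption sets is exponential; practical abductive engines avoid it by caching features of the search space, which is the motivation behind HT4's sweeps below.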

In this paper, we explore the implementation of the HT4 abductive algorithm [26] to conclude that abduction is a general framework for a wide range of KBS tasks. In particular, we will argue here that abduction can be used for prediction, classification, explanation, planning, monitoring, diagnosis, qualitative reasoning, validation, verification, diagrammatic reasoning, multiple-expert knowledge acquisition and decision support systems. We argue elsewhere that abduction could model certain interesting features of human cognition [27]. Others argue that abduction is also a framework for natural-language processing [29], design [35], visual pattern recognition [36], analogical reasoning [9], financial reasoning [14], machine learning [15, 31] and case-based reasoning [20]. Due to space limitations, this article is an overview only (a longer version is in preparation).

2 About HT4
2.1 Definitions
To execute HT4, the user must supply a theory T comprising a set of uniquely labeled statements S_x. The dependency graph connecting literals in T is an and-or graph ⟨⟨V_and, V_or⟩, E, I⟩; i.e. a set of directed edges E connecting vertices V, constrained by invariants I. I is defined in the negative; i.e. ¬I means that no invariant violation has occurred (e.g. ¬I(p, ¬p)). Each edge E_x and vertex V_y is labeled with the S_z that generated it. HT4 extracts the subsets of E which are relevant to some user-supplied TASK. Each TASK_x is a triple ⟨IN, OUT, BEST⟩. Each task comprises some OUTputs to be reached, given some INputs
† Department of Software Development, Monash University, Melbourne, VIC; timm@insect.sd.monash.edu.au. Submitted to the Australian AI '95 conference.

(OUT ⊆ V and IN ⊆ V). IN can either be a member of the known FACTS or a DEFAULT belief which we can assume if it proves convenient to do so. Typically, FACTS = IN ∪ OUT. If there is more than one way to achieve the TASK, then the BEST operator selects the preferred way(s). To reach a particular output OUT_z ∈ OUT, we must find a proof tree P_x using vertices P_x^used whose single leaf is OUT_z and whose roots are from IN (denoted P_x^roots ⊆ IN). All immediate parent vertices of each V_and ∈ P_x^used must also appear in P_x^used. One parent of each V_or ∈ P_x^used must also appear in P_x^used, unless V_or ∈ IN (i.e. is an acceptable root of a proof). No subset of P_x^used may contradict the FACTS; e.g. for invariants of arity 2:

¬(V_y ∈ P_x^used ∧ V_z ∈ FACTS ∧ I(V_y, V_z))

Maximal consistent subsets of P (i.e. maximal with respect to size, consistent with respect to I) are grouped together into worlds W (W_i ⊆ E). Each world W_i contains a consistent set of beliefs that are relevant to the TASK. The union of the vertices used in the proofs of W_i is denoted W_i^used. For any world W_i, W_i^causes are the members of IN found in W_i (W_i^causes = W_i^used ∩ IN). The achievable or covered goals G in W_i are the members of OUT found in that world (W_i^covered = W_i^used ∩ OUT). The union of the vertices used in all proofs, minus the FACTS, is the HT4 assumption set A_all; i.e.

A_all = ( ⋃_x P_x^used ) − FACTS

The union of the subsets of A_all which violate I are the controversial assumptions A_C:

A_C = { V_x : V_x ∈ A_all ∧ V_y ∈ A_all ∧ I(V_x, V_y) }

Within a proof P_x, the preconditions for V_y ∈ P_x^used are the transitive closure of all the parents of V_y in that proof. The base controversial assumptions (A_B) are the controversial assumptions which have
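A minimal sketch of the assumption-set definitions above, assuming proofs are represented simply as sets of vertex names (the proofs, FACTS and invariant pairs below are invented examples, not HT4's real data structures):

```python
# Invented toy data for illustrating A_all and A_C.
FACTS = {"in1", "out1"}
proofs = [{"in1", "a", "b", "out1"},       # P_x^used for each proof tree
          {"in1", "a", "not_b", "out1"}]
invariants = [("b", "not_b")]               # I, arity 2

# A_all: vertices used in any proof that are not known FACTS
A_all = set().union(*proofs) - FACTS

# A_C: controversial assumptions -- members of A_all that clash under I
A_C = {v for v in A_all
       for (p, q) in invariants
       if (v == p and q in A_all) or (v == q and p in A_all)}

print(sorted(A_all))   # -> ['a', 'b', 'not_b']
print(sorted(A_C))     # -> ['b', 'not_b']
```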
no controversial assumptions in their preconditions (i.e. they are not downstream of any other controversial assumptions). In terms of separating the proofs into worlds, A_B are the crucial assumptions. We call the maximal consistent subsets of A_B the environments ENV (ENV_i ⊆ A_B ⊆ A_C ⊆ A_all ⊆ V). The union of the proofs that do not contradict ENV_i is the world W_i. In order to check for non-contradiction, we compute the exclusion sets X. X_i are the base controversial assumptions that are inconsistent with ENV_i. A proof P_j belongs in world W_i if it does not use any member of X_i (the excluded assumptions of that world); i.e.

W_i = { P_j : P_j^used ∩ X_i = ∅ }

Note that each proof can exist in multiple worlds. In the case of more than one world, the BEST criteria is applied to select the subset of W which the user wants to see. Numerous BESTs can be found in the literature; e.g. the BEST worlds are the ones which contain (i) the most specific proofs (i.e. largest size) [12]; (ii) the fewest causes [38]; (iii) the most specific concepts [33]; (iv) the largest subset of E [29]; (v) the largest number of covered outputs [28]; (vi) the most edges that model processes which are familiar to the user [32]; (vii) the most edges that have been used in prior acceptable solutions [20]; or (viii) the least number of edges coming from S_x statements written by researchers you are currently feuding with. Our view is that BEST is domain specific; i.e. we believe that there is no best BEST.

2.2 HT4 is Abduction

Given certain renamings, HT4 satisfies the definition of abduction given in the introduction (see EQ1, EQ2). HT4-style abduction is the search for (i) a subset of E called W_i, (ii) a subset of IN called W_i^causes, (iii) a subset of OUT called W_i^covered, and (iv) a subset of V called ENV_i such that:

EQ1.1: W_i ∧ W_i^causes ∧ ENV_i ⊢ W_i^covered
EQ2.1: W_i ∧ W_i^causes ∧ ENV_i ∧ W_i^covered ∧ ¬I

In the case where multiple worlds can be generated, the BEST operator decides which world(s) to show to the user. HT4 does not try to explain all of G; rather, it explains what it can (W_i^covered). Further, the assumptions A found by HT4 are the useful inputs (W_i^causes), some assumptions about intermediaries between the useful inputs and the covered outputs (ENV_i), and the edges relevant to a particular TASK (W_i^used).

2.3 Implementation

The core computational problem of HT4 is the search for X_i. Earlier versions of HT4 [10, 11, 23] computed the BEST worlds W via a basic depth-first chronological backtracking search (DFS) with no memoing. These systems took days to terminate [26]. Mackworth [22] and DeKleer [6] warn that a DFS can learn features of a search space, then forget them on backtracking; hence it may be doomed to waste time re-learning those features later on. One alternative to chronological backtracking is an algorithm that caches what it learns about the search space as it executes. HT4 runs in four "sweeps" which learn and cache features of the search space as they execute.

Facts sweep: In the case where ⟨V, E⟩ is pre-enumerated and cached and I has an arity of 2, a hash table NOGOOD can be built in O(|V|^2) time that maps every vertex to the set of vertices that it is incompatible with. Once NOGOOD is known, the facts sweep can cull all V_x that are inconsistent with the FACTS in time O(|V|). Note: a simplifying assumption made by HT4 is that NOGOODs are only defined for V_or vertices (i.e. the NOGOOD sets for V_and are empty).

Forwards sweep: A_C is computed as a side-effect of forward chaining from IN (ignoring I) to find IN*, the vertices reachable from IN. In the worst case, finding IN* is transitive closure (i.e. O(|V|^3)). Once IN* is known, A_C can be found in time O(|V|).(1)

Backwards sweep: A_B is computed as a side-effect of growing proofs back from OUT across IN*. Each proof P_y carries its forbids set (the vertices that, with P_y^used, would violate I) and the upper-most members of A_C (called the proof guesses) found during proof generation. The backwards sweep handles V_or vertices differently to V_and vertices:

A candidate V_x^or for inclusion in P_y^used must satisfy V_x^or ∉ P_y^used (loop detection) and V_x^or ∉ P_y^forbids (consistency check). If added, the vertices that are NOGOOD with V_x^or are added to P_y^forbids.

After checking for looping, a candidate V_x^and that seeks inclusion in P_y^used must check all combinations of all proofs which can be generated from its parents. The cross-product of the proofs from the V_x^and parent vertices is calculated (which implies a recursive call to the backwards sweep for each parent, then collecting the results in a temporary). Each combination of parent proofs, plus V_x^and, is combined to form a single new proof. Proof combination generates a new proof whose used, forbids and guesses sets are the unions of those sets from the combining proofs. A combined proof is valid if its used set does not intersect its forbids set. Each valid combination represents one use of V_x^and to connect an OUT vertex to the IN set.

After all the proofs are generated, the union of all the proof guess sets is A_B. If the average size of a proof is N and the average fanout of the graph is F, then the worst-case backwards sweep is O(N^F).

Worlds sweep: HT4 assumes that its V_or are generated from attributes with a finite number of mutually exclusive discrete states (e.g. {day=mon, day=tues, ...}). With this assumption, the generation of ENV_i is just the cross product of all the used states of all the attributes found in A_B. From ENV_i we can find X_i in linear time (using NOGOODs). The worlds sweep is simply two nested loops over each X_i and each P_j (i.e. O(|X| |P|)).

(1) Set manipulations are done using bitstrings (so set membership can be computed in constant time).

2.4 Applying BEST

Somewhere within the above process, the BEST criteria must be applied to cull unwanted worlds. HT4 applies BEST after world generation. There is no reason why certain BESTs could not be applied earlier; e.g. during proof generation. For example, if it is known that BEST will favour the worlds with the smallest path sizes between inputs and goals, then a beam-search style BEST operator could cull excessively long proofs within the generation process. More generally, we characterise BESTs by the information they require before they can run. Vertex-level assessment operators can execute at the local-propagation level; e.g. use the edges with the highest probability. Proof-level assessment operators can execute when some proofs or partial proofs are known; e.g. beam search. Favoring the world(s) that cover (e.g.) the greatest number of outputs (BEST_4) is a worlds-level assessment operator which cannot execute till all the worlds are generated.

While the complexity of BEST is operator specific, we can make some general statements about its computational cost. Vertex- or proof-level assessment reduces the O(N^F) complexity of the backwards sweep (since not all paths are explored). Worlds-level assessment is a search through the entire space that could be relevant to a certain task. Hence, for fast runtimes, do not use worlds-level assessment. However, for some tasks (e.g. the KBS validation task discussed below), worlds-level assessment is unavoidable.

2.5 Practicality

Abduction has a reputation of being impractically slow [8]. Selman & Levesque show that even when only one abductive explanation is required and T is restricted to acyclic theories, abduction is NP-hard [42]. Bylander et al. make a similarly pessimistic conclusion [3]. In practice, these theoretical restrictions may not limit application development. Ng & Mooney report reasonable runtimes using a beam-search proof-level assessment operator [29]. Figure 1 shows the average runtime in seconds for executing HT4 using worlds-level assessment (BEST_4) over 94 and-or graphs and 1991 ⟨IN, OUT⟩ pairs [24]. For that study, a "give up" time of 14 minutes was built into the test engine. HT4 did not terminate for |V| ≥ 850 (shown in Figure 1 as a vertical line).

[Figure 1: Average runtimes. Runtime (seconds, 0 to 1500) against theory size |V| (0 to 1000).]

In practice, how restrictive is a limit of 850 vertices? Details of the nature of real-world expert systems are hard to find in the literature. The only data we could find is shown in Figure 2, which shows the size of the dependency graph between literals in fielded propositional expert systems [37]. Figure 2 suggests that a practical inference engine must work for the range 55 ≤ |V| ≤ 510. Note that the Figure 1 results were obtained from a less-than-optimum platform: Smalltalk/V on a PowerBook 170 (a port to "C" on a Sparc station is currently in progress). However, the current results on a relatively slow platform show that even when we run HT4 slowly (i.e. using BEST_4, a worlds-level assessment), it is practical for the theory sizes we see in practice.

Figure 2: Figures from fielded expert systems

Application   |V|
displan        55
mmu            65
tape           80
neuron        155
DMS-1         510

3 Applications

Prediction is implemented by calling HT4 with OUT = V − IN; i.e. find all vertices we can reach from the inputs. This is a non-naive implementation of prediction, since mutually exclusive predictions will be found in different worlds. Note that in the special case where IN are all root vertices in the graph and OUT = V − IN, our abductive system will compute ATMS-style [6] total envisionments; i.e. all possible consistent worlds that are extractable from the theory. A more efficient case is where IN is smaller than all the roots of the graph and some interesting subset of the vertices has been identified as possible reportable outputs (i.e. OUT ⊆ V − IN).

Classification is just a special case of prediction. Given a dependency graph that connects classes to their attributes and their super-classes, OUT is just the vertices that are classes. BEST could also favour the worlds that include the more specific classes [33].

Wick and Thompson report that the current view of explanation is more elaborate than merely "print the rules that fired" or the "how" and "why" queries of MYCIN [44]. Explanation is now viewed as an inference procedure in its own right rather than a pretty-print of some filtered trace of the proof tree. In the current view, explanations should be customised to the user and the task at hand. For example, Paris describes an explanation algorithm that switches from process-based explanations to parts-based explanations whenever the explanation procedure enters a region which the user is familiar with [32]. This current view of explanation can be modeled as abduction. Given a user profile listing the vertices familiar to the user and the edges representing processes that the user is aware of, BEST_EXPLANATION favours the worlds with the largest intersection with this user profile.

We characterise planning as the generation of N plans (worlds) that achieve some goal(s) while satisfying a BEST criteria such as minimum cost. These plans are then passed to a monitoring process which reviews the possible plans as new data comes to light. Plans that use literals which are inconsistent with new data are rejected. The remaining plans represent the space of possible ways to achieve the desired goals in the current situation. If all plans are rejected, then HT4 is run again using all the available data.

Parsimonious set-covering diagnosis [38] uses a BEST that favors worlds that explain the most things with the smallest number of diseases (i.e. maximise W_x ∩ OUT and minimise W_x ∩ IN). Set-covering diagnosis is best for fault models and causal reasoning [18]. The opposite of set-covering diagnosis is consistency-based diagnosis [5, 7, 13, 34, 40], where all worlds consistent with the current observations are generated. Computationally, this is equivalent to the prediction process described above, with small variants. For example, in Reiter's variant on consistency-based diagnosis [40], all predicates relating to the behaviour of a model component V_x assume a test that V_x is not acting ABnormally; i.e. ¬AB(V_x). BEST_Reiter favours the worlds that contain the least number of AB assumptions. A task related to diagnosis is probing. When exploring different diagnoses, an intelligent selection of tests (probes) can maximise the information gain while reducing the testing cost [7]. In HT4, we would know to favour probes of A_B over probes of A_C over probes of non-controversial assumptions.

HT4 was originally developed as a tool for processing qualitative compartmental models in neuroendocrinology [10, 11]. Qualitative reasoning is the study of systems whose numeric values are replaced by one of three qualitative states: up, down or steady [16]. A fundamental property of such systems is their indeterminacy. In the case of competing qualitative influences, three possible results are up, down or steady. These alternatives and their consequences must be considered separately. Abduction can maintain these alternatives in separate worlds.
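A parsimonious set-covering BEST of the kind described above can be sketched as a two-part score, coverage first and parsimony second; the worlds and vocabulary below are invented toy data:

```python
# Sketch of a parsimonious set-covering BEST (after the set-covering idea in [38]):
# among candidate worlds, explain the most outputs with the fewest causes.
IN = {"flu", "cold", "allergy"}
OUT = {"fever", "cough", "sneeze"}
worlds = [{"flu", "fever", "cough"},
          {"cold", "allergy", "cough", "sneeze"},
          {"flu", "cold", "fever", "cough", "sneeze"}]

def score(w):
    covered = len(w & OUT)     # maximise W_x n OUT
    causes = len(w & IN)       # minimise W_x n IN
    return (covered, -causes)  # lexicographic: coverage first, parsimony second

best = max(worlds, key=score)
print(sorted(best))
```

Swapping in a different `score` function yields a different diagnosis style, e.g. counting ¬AB assumptions gives a Reiter-style BEST.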

KBS validation tests a theory's validity against external semantic criteria. Given a library of known behaviours (i.e. a set of pairs ⟨IN, OUT⟩), abductive validation uses a BEST that favours the worlds with the largest number of covered outputs (i.e. maximise OUT ∩ W_x) [28].

KBS verification tests a theory's validity against internal syntactic criteria. HT4 could be used for numerous KBS verification tests [26]. For example: circularities could be detected by computing the transitive closure of the and-or graph; ambivalence (a.k.a. inconsistency) could be reported if more than one world can be generated; un-usable rules could be detected if the edges from the same S_x statement in the knowledge base touch vertices that are incompatible (as defined by I). We prefer external semantic criteria to internal syntactic criteria, since we know of fielded expert systems that contain syntactic anomalies yet still perform adequately [37].

Diagrammatic reasoning: Informal vague causal diagrams (VCDs) are typically viewed as a precursor to other modeling techniques. We have argued elsewhere that abduction can process these diagrams directly, without having to request more information from the expert(s). We say that a VCD is understood iff we can extract from it a deductive theory that can explain some known behaviour without also entailing inconsistencies. Further, we say that VCD_i is better than VCD_j iff the deductive theories extracted from VCD_i can explain more known behaviour than the deductive theories we can extract from VCD_j. This process maps directly into HT4: the extracted theory is a world, and deciding which is the better theory is merely an application of the above validation algorithm.

This diagrammatic reasoning technique is also a general model of multiple-expert knowledge acquisition. Given (i) a community of feuding experts who have generated (ii) a set of competing theories and (iii) a library of known behaviour, we can resolve the feud by rejecting the theories that explain a significantly lower percentage of the known behaviour.

Abduction is also a general framework for decision support systems (DSS). For example, a standard DSS function is a "what-if" query in which users explore hypothetical options. Implementing such a query over a multiple-worlds architecture such as HT4 would be trivial. More generally, when we look into the internals of a DSS, we see sub-routines that use many of the inference procedures described above. For example, Brookes defines a DSS as (i) the generation/evaluation/selection of alternative solutions to (ii) detected and diagnosed problems, followed by (iii) the monitoring of the selected solution [2]. Brookes' "detection" is equivalent to set-covering diagnosis. Abduction is an appropriate tool for the generation of alternatives, while the BEST operators control evaluation and selection. Diagnosis and monitoring were discussed above. Further, HT4 offers a principled approach for group DSS, in which groups attempt to build a consensus model of their domain. Such models do not need to be fully specified before they can be assessed. Using abduction, executables can be generated from half-formed ideas to give the group feedback on their ideas. The above multiple-expert knowledge acquisition strategy can be used to cull not-so-promising ideas developed within the group.
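The abductive validation and multiple-expert strategy above can be sketched as follows; here `covered` is a hypothetical stand-in for a full HT4 run, and the behaviour library and expert theories are invented:

```python
# Sketch of abductive validation / multiple-expert comparison: each competing
# theory is scored by how much of a library of known <IN, OUT> behaviours it
# can cover. The reachability loop below merely stands in for HT4.
def covered(theory, inputs, outputs):
    """Toy stand-in for an HT4 run: outputs reachable from inputs via edges."""
    reached = set(inputs)
    changed = True
    while changed:
        changed = False
        for src, dst in theory:
            if src in reached and dst not in reached:
                reached.add(dst)
                changed = True
    return reached & set(outputs)

library = [({"a"}, {"c"}), ({"b"}, {"c", "d"})]   # known <IN, OUT> pairs
theory1 = [("a", "c"), ("b", "c")]                 # expert 1's edges
theory2 = [("a", "c"), ("b", "c"), ("b", "d")]     # expert 2's edges

def score(theory):
    return sum(len(covered(theory, i, o)) for i, o in library)

print(score(theory1), score(theory2))  # theory2 covers more known behaviour
```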

4 Related Work

HT4 is a general abductive engine for finite domains. First-order abduction is discussed in [17, 8, 15, 36]. We focus on finite theories since we are very concerned with efficiency. HT4 could process first-order theories, but these would first have to be unfolded to a ground state. The relevant edges of an HT4 world are a subset of the edges reachable from IN, since only the edges on proofs that lead to OUT are used. Hence, an HT4 world is a subset of a Reiterian extension from default logic [39].

Our proposal for a single internal representation for expert systems (⟨⟨V_and, V_or⟩, E, I⟩) and a single inference procedure (abduction) for their processing is similar to the SOAR project [41]. SOAR is a state-space traversal using local knowledge to select appropriate operators to transform the current state into the next state. Knowledge in SOAR is represented in rules using a RETE-like forward chaining algorithm (though with much more control over conflict resolution). A SOAR state is of a larger grain size than an HT4 V_x vertex. SOAR's knowledge is essentially vertex-level BEST knowledge, though some work has been done on proof-level BEST assessment [19]. SOAR seems to have some problems with worlds-level assessment. Experiments with abduction in SOAR (called antecedent derivation) required the use of a separate abductive inference engine [43]. Our claim is that a general abductive inference engine can replace, rather than merely augment, existing KBS inference systems.

Similarly, we reject DeKleer's separation of a problem solver into an inference engine and an assumption-based truth maintenance system [6]. While such a split may be pragmatically useful for procedural inference engines, we argue that a comprehensive picture of declarative inference views the abductive ATMS process as a general description of expert systems processing. Abduction is not a useful sub-routine within KBS inference; it is KBS inference.

The knowledge-level-B modeling (KLB) community (which includes KADS [45] but excludes the SOAR knowledge-level-A community) proposes implementation-independent principles for the organisation of expert systems [25]. The KLB community has nearly discovered abduction several times. For example, Clancey's distinction between heuristic classification and heuristic construction [4] is the difference between abduction with no invariants (where every combination of proofs is legal) and abduction with invariants (where proofs compete with each other). Also, Breuker's recent discussion of components of solutions [1] sounds to us like three recursive calls to a single inference procedure. Breuker claims that different expert systems inference procedures can be interfaced to each other if we recognise that they all contain the same four components of solutions: an argument structure which is extracted from a conclusion, which is in turn extracted from a case model, which is in turn extracted from a generic domain model. Note that, in all cases, each sub-component is generated by extracting a relevant subset of some background theory to generate a new theory (i.e. abduction).

We prefer our abductive approach to KLB for two reasons. Firstly, once a KBS has been modeled abductively, we can execute it directly; methodologies such as KADS require a subsequent implementation phase. Secondly, we have argued elsewhere that the limits to KBS validation are really the limits to KBS construction, since we should not use heuristic models that have not been tested [24]. An abductive KBS gets a validation engine for free (see above). Such validation engines are extra work in a KLB system; hence, in the usual case, they are not built.

5 Conclusion

Theoretically, abduction is a unifying principle for expert systems. Preliminary experiments suggest that abduction is also a practical tool for unifying expert system construction. Our current research goal is to broaden that experimental base.

References
[1] J. Breuker. Components of problem solving and types of problems. In 8th European Knowledge Acquisition Workshop, EKAW '94, pages 118-136, 1994.
[2] C.H.P. Brookes. Requirements elicitation for knowledge based decision support systems. Technical Report 11, Information Systems, University of New South Wales, 1986.
[3] T. Bylander, D. Allemang, M.C. Tanner, and J.R. Josephson. The computational complexity of abduction. Artificial Intelligence, 49:25-60, 1991.
[4] W. Clancey. Heuristic classification. Artificial Intelligence, 27:289-350, 1985.
[5] L. Console and P. Torasso. A spectrum of definitions of model-based diagnosis. Computational Intelligence, 7(3):133-141, 1991.
[6] J. DeKleer. An assumption-based TMS. Artificial Intelligence, 28:163-196, 1986.
[7] J. DeKleer and B.C. Williams. Diagnosing multiple faults. Artificial Intelligence, 32(1):97-130, 1987.
[8] K. Eshghi. A tractable class of abductive problems. In IJCAI '93, volume 1, pages 3-8, 1993.
[9] B. Falkenhainer. Abduction as similarity-driven explanation. In P. O'Rourke, editor, Working Notes of the 1990 Spring Symposium on Automated Abduction, pages 135-139, 1990.
[10] B.Z. Feldman, P.J. Compton, and G.A. Smythe. Hypothesis testing: an appropriate task for knowledge-based systems. In 4th AAAI-Sponsored Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada, October 1989.
[11] B.Z. Feldman, P.J. Compton, and G.A. Smythe. Towards hypothesis testing: JUSTIN, a prototype system using justification in context. In Proceedings of the Joint Australian Conference on Artificial Intelligence, AI '89, pages 319-331, 1989.
[12] C.L. Forgy. RETE: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence, 19:17-37, 1982.
[13] M.R. Genesereth. The use of design descriptions in automated diagnosis. Artificial Intelligence, 24:411-436, 1984.
[14] W. Hamscher. Explaining unexpected financial results. In P. O'Rourke, editor, AAAI Spring Symposium on Automated Abduction, pages 96-100, 1990.
[15] K. Hirata. A classification of abduction: Abduction for logic programming. In Proceedings of the Fourteenth International Machine Learning Workshop, ML-14, page 16, 1994.
[16] Y. Iwasaki. Qualitative physics. In A. Barr, P.R. Cohen, and E.A. Feigenbaum, editors, The Handbook of Artificial Intelligence, volume 4, pages 323-413. Addison Wesley, 1989.
[17] A.C. Kakas and P. Mancarella. Generalized stable models: A semantics for abduction. In ECAI-90, 1990.
[18] K. Konolige. Abduction versus closure in causal theories. Artificial Intelligence, 53:255-272, 1992.
[19] J.E. Laird and A. Newell. A universal weak method: Summary of results. In IJCAI '83, pages 771-773, 1983.
[20] D.B. Leake. Focusing construction and selection of abductive hypotheses. In IJCAI '93, pages 24-29, 1993.
[21] H. Levesque. A knowledge-level account of abduction (preliminary version). In IJCAI '89, volume 2, pages 1061-1067, 1989.
[22] A.K. Mackworth. Consistency in networks of relations. Artificial Intelligence, 8:99-118, 1977.
[23] T. Menzies, A. Mahidadia, and P. Compton. Using causality as a generic knowledge representation, or why and how centralised knowledge servers can use causality. In Proceedings of the 7th AAAI-Sponsored Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, Banff, Canada, October 11-16, 1992.
[24] T.J. Menzies and P. Compton. The (extensive) implications of evaluation on the development of knowledge-based systems. In Proceedings of the 9th AAAI-Sponsored Banff Knowledge Acquisition for Knowledge-Based Systems Workshop, 1995. Available from //www.sd.monash.edu.au/ timm/pub/docs/papers.html.
[25] T.J. Menzies. An overview of limits to knowledge level-B modeling (and KADS). In AI '95, 1995. Submitted. Available from //www.sd.monash.edu.au/ timm/pub/docs/papers.html.
[26] T.J. Menzies. Principles for Generalised Testing of Knowledge Bases. PhD thesis, University of New South Wales, 1995.
[27] T.J. Menzies. Situated semantics is a side-effect of the computational complexity of abduction. In Australian Cognitive Science Society, 3rd Conference, 1995. Available from //www.sd.monash.edu.au/ timm/pub/docs/papers.html.
[28] T.J. Menzies and W. Gambetta. Exhaustive abduction: A practical model validation tool. In ECAI '94 Workshop on Validation of Knowledge-Based Systems, 1994. Available from //www.sd.monash.edu.au/ timm/pub/docs/papers.html.
[29] H.T. Ng and R.J. Mooney. The role of coherence in constructing and evaluating abductive explanations. In Working Notes of the 1990 Spring Symposium on Automated Abduction, volume TR 90-32, pages 13-17, 1990.
[30] P. O'Rourke. Working notes of the 1990 Spring Symposium on Automated Abduction. Technical Report 90-32, University of California, Irvine, CA, September 27, 1990.
[31] M. Pagnucco, A.C. Nayak, and N.Y. Foo. Abductive expansion: Abductive inference and the process of belief change. In C. Zhang, J. Debenham, and D. Lukose, editors, AI '94, Australia, 1994.
[32] C.L. Paris. The use of explicit user models in a generation system for tailoring answers to the user's level of expertise. In A. Kobsa and W. Wahlster, editors, User Models in Dialog Systems, pages 200-232. Springer-Verlag, 1989.
[33] D. Poole. On the comparison of theories: Preferring the most specific explanation. In IJCAI '85, pages 144-147, 1985.
[34] D. Poole. Normality and faults in logic-based diagnosis. In IJCAI '89, pages 1304-1310, 1989.
[35] D. Poole. Hypo-deductive reasoning for abduction, default reasoning, and design. In P. O'Rourke, editor, Working Notes of the 1990 Spring Symposium on Automated Abduction, volume TR 90-32, pages 106-110, 1990.
[36] D. Poole. A methodology for using a default and abductive reasoning system. International Journal of Intelligent Systems, 5:521-548, 1990.
[37] A.D. Preece and R. Shinghal. Verifying knowledge bases by anomaly detection: An experience report. In ECAI '92, 1992.
[38] J. Reggia, D.S. Nau, and P.Y. Wang. Diagnostic expert systems based on a set covering model. Int. J. of Man-Machine Studies, 19(5):437-460, 1983.
[39] R. Reiter. A logic for default reasoning. Artificial Intelligence, 13:81-132, 1980.
[40] R. Reiter. A theory of diagnosis from first principles. Artificial Intelligence, 32(1):57-96, 1987.
[41] P.S. Rosenbloom, J.E. Laird, and A. Newell. The SOAR Papers. The MIT Press, 1993.
[42] B. Selman and H.J. Levesque. Abductive and default reasoning: a computational core. In AAAI '90, pages 343-348, 1990.
[43] D.M. Steier. CYPRESS-SOAR: A case study in search and learning in algorithm design. In P.S. Rosenbloom, J.E. Laird, and A. Newell, editors, The SOAR Papers, volume 1, pages 533-536. MIT Press, 1993.
[44] M.R. Wick and W.B. Thompson. Reconstructive expert system explanation. Artificial Intelligence, 54:33-70, 1992.
[45] B.J. Wielinga, A.T. Schreiber, and J.A. Breuker. KADS: a modelling approach to knowledge engineering. Knowledge Acquisition, 4(1):1-162, 1992.
