Answer Set Based Design of Knowledge Systems
1 Introduction
A-Prolog is a language for knowledge representation and reasoning [42,25,10]. The language is expressive and has a well-understood methodology for representing defaults, causal properties of actions and fluents, various types of incompleteness, etc.
In this paper we describe an A-Prolog based methodology for modeling of and
reasoning about complex dynamic systems. We show that A-Prolog can be used
to specify all the elements of the model: the specification of the initial situation,
the causal laws that rule the evolution of the domain, the reasoning modules, and
the control knowledge used to guide the reasoning processes.
We also show how our methodology can be enhanced, using a recently developed
extension of A-Prolog called CR-Prolog [7,9], to allow the specification of re-
quirements that the solutions found by the reasoning modules should satisfy if at
all possible. The addition of such requirements substantially improves the quality
of reasoning.
We describe our methodology by showing its application to building USA-
Advisor, a decision support system for the Reaction Control System (RCS) of
the Space Shuttle. This application builds on a previous investigation [46,14, 30],
where a substantially smaller part of RCS was represented in Prolog and used to
check correctness of plans.
The RCS is the Shuttle’s system that has primary responsibility for maneuvering the craft while it is in space. It consists of fuel and oxidizer tanks, valves and other plumbing needed to provide propellant to the maneuvering jets of the Shuttle. It also includes electronic circuitry: both to control the valves in the fuel lines and to prepare the jets to receive firing commands. Overall, the system is rather complex, in that it includes 12 tanks, 44 jets, 66 valves, 33 switches, and around 160 computer commands (computer-generated signals).
When an orbital maneuver is required, the astronauts must configure the RCS
accordingly. This involves changing the position of several switches, which are
used to open or close valves or to energize the proper circuitry. Normally, the
sequences of actions required to configure the RCS are pre-determined before the
beginning of the mission and the astronauts simply have to search for the sequence
in a manual. However, faults (e.g. the inability to move a switch) may make these
pre-scripted sequences of actions inapplicable. The number of possible sets of
failures is too large to plan in advance for all of them. In this case, the astronauts
communicate the problem to the ground flight controllers, who come up with a
new sequence of actions to perform the desired task. The main challenge of this
step is to find plans that achieve the desired results without causing any possibly
dangerous side effect.
USA-Advisor can be viewed as a part of a decision support system for Shuttle
flight controllers. It is an intelligent system capable of verifying and generating
plans that prepare the RCS for a given maneuver. As such, it can be used when
the flight controllers have to come up with a plan for an emergency situation. Of
course, it can also be used “off-line” to pre-determine, before the beginning of the
mission, the plans for possible fault conditions.
The main issues involved in building the USA-Advisor are:
2 A-Prolog
An A-Prolog rule has the form

h1 or . . . or hk ← l1 , . . . , lm , not lm+1 , . . . , not ln .

The informal reading of the rule (in terms of the reasoning of a rational agent about its own beliefs) is “if you believe l1 , . . . , lm and have no reason to believe lm+1 , . . . , ln , then believe one of h1 , . . . , hk .” The connective “not” is called default negation.
A rule such that k = 0 is called a constraint, and is considered a shorthand of

f ← l1 , . . . , lm , not lm+1 , . . . , not ln , not f

where f is a fresh atom not occurring elsewhere in the program.
The reader may have noticed that, as in [25], negated atoms, ¬p, are not allowed to occur in s-atoms. However, unlike in [25], we allow negated atoms to be used everywhere else in the program.
Notice that the combination of sets with classical and default negation introduces some subtleties. Consider the following informal argument. Suppose we are given a statement {X : p(X)} ⊆ {X : q(X)} and we know q(a) and ¬q(b), but have no information about q(c). Clearly, p(a) satisfies the condition. But what can we say about ¬p(b)? And what about p(c) or ¬p(c)?
To restrict ourselves to cases in which the meaning of s-atoms is unambiguous,
we give the following definition of A-Prolog program.
Definition 8 An A-Prolog program is a pair ⟨Σ, Π⟩, where Σ is a signature, Π is a set of A-Prolog rules, and for every atom r(X) that occurs in the scope of an s-atom, Π contains the rule:

¬r(X) ← not r(X).
Rules of the form

{X : p(X)} ⊆ {X : q(X)} ← Γ

are called selection rules. It can be noted that selection rules are closely related to the choice rules

m{p(X) : q(X)}n ← Γ (4)

introduced in [45,39]. Proposition 5 of [25] makes this connection precise. Adapted to the language used here, the proposition states the following.
One limitation of our definition of A-Prolog with respect to the language of [45,
39] is that it does not allow the specification of bounds, i.e. of the lower and upper
number of elements of the subset defined by {X : p(X)} ⊆ {X : q(X)}. For
simple bounds such as those used in this paper (we use only an upper bound of
1) we will use simple constraints, and avoid the introduction of the f-atoms from
[25]. For example, imposing a maximum limit of 1 on the cardinality of the set
(assuming that the arity of p is 1) can be achieved by means of the constraint:
← p(X1), p(X2), X1 ≠ X2.
The RCS is the system used to maneuver the Space Shuttle while it is in orbit.
The RCS is viewed as composed of three subsystems: the Forward RCS, the Left
RCS, and the Right RCS.
The propellants for the RCS jets, or thrusters, are stored in fuel and oxidizer
tanks, pressurized with helium, and are distributed through several different types
of pressure regulation and relief valves, distribution lines (here called plumbing)
and filling and draining connections, called junctions. The only physical connec-
tion among the subsystems of the RCS is an interconnection between the Left
and Right subsystems, called crossfeed. This provision is part of the redundancy
capabilities added to the Space Shuttle to ensure the safety of its operation.
In order for the Space Shuttle to perform a given maneuver, a set of jets, belonging
to the correct subsystems and pointing in the correct directions, must be prepared
to fire. Preparing a jet to fire involves providing an open, non-leaking path for the
fuel to flow from pressurized fuel tanks to the jet. The flow of fuel is controlled
by opening and closing pressure regulation and relief valves. Valves are opened
and closed by either having an astronaut flip a switch or by instructing the on-
board computer to issue special commands. In a very simplified form, the RCS
can be viewed as a directed graph of the type shown in Figure 2, whose nodes are
tanks, jets and pipe junctions, and whose arcs are labeled by valves. Switches are
connected to valves through fairly complex electrical circuits.
Some of the modules contain heuristic information aimed at improving both the quality of plans and the efficiency of the planner. Additional modules provide the description of the schematics of each electrical circuit.
The model of the RCS is based on the research on action languages [29]. As the reader will probably notice, the rules used in the model are a straightforward translation of the causal laws of the action language AL [12].
The connections among the modules are depicted in Figure 1. In the rest of this
section we give a detailed description of each module.
Figure 1 Connections among the modules: occurrences of switch flippings and computer commands, together with information on stuck switches and stuck valves, are input to the Valve Control Module (Basic and Extended VCM); its output ultimately determines ready_to_fire(J).
The Plumbing Module (PM) models the plumbing system of the RCS, which con-
sists of a collection of tanks, jets and pipe junctions connected through pipes. The
flow of fluids through the pipes is controlled by valves. The system’s purpose is
to deliver fuel and oxidizer from tanks to the jets needed to perform a maneuver.
The structure of the plumbing system is described by a directed graph G of the
type shown in Figure 2, whose nodes are tanks, jets and pipe junctions, and whose
arcs are labeled by valves. The possible faults of the system at this level are leaky
valves, damaged jets, and valves stuck in some position. The purpose of PM is to
describe how faults and the position of valves affect the pressure of tanks, jets and
junctions. This is accomplished by means of state constraints alone.
Figure 2 A simplified view of the RCS: a directed graph whose nodes are a tank (Tank), two junctions (Junc1, Junc2) and a jet (Jet), whose arcs are labeled by valves V1–V4, and whose valves are controlled by switches Sw1–Sw4.
In particular, when fuel and oxidizer flow at the right pressure from the tanks to a
properly working jet, the jet is considered ready to fire. In order for a maneuver to
be started, all the jets it requires must be ready to fire. Pressurization of fuel and
oxidizer tanks is obtained by releasing helium from the helium tanks connected to
the fuel and oxidizer tanks. The necessary condition for a fluid to flow from a tank
to a jet, and in general to any node of G, is that there exists a path without leaks
from the tank to the node and that all valves along the path are open.
The rules of PM define a function which takes as input the structural description,
G, of the plumbing system, its state including position of valves and the list of
faulty components, and determines: the distribution of pressure through the nodes
of G; which jets are ready to fire; which maneuvers are ready to be performed.
The elements of the plumbing system are represented in PM as follows. The arcs of graph G are described by relation link(N1,N2,V), which holds iff G contains a directed arc from node N1 to N2 and this arc is labeled by the valve V. For instance, a statement link(ffh,ff,ffha) says that fuel helium tank ffh is connected to fuel propellant tank ff by valve ffha. Relation jet_of(J,R) identifies jets and the subsystem they belong to. The subsystems of the RCS are identified by statements: system(fwd_rcs), system(left_rcs), and system(right_rcs). Relation direction(J,D) specifies the direction of jets. There are six different possible directions different jets point to: up, down, left, right, forward, and aft. Relation tank_of(T,R) links each tank to the subsystem it belongs to. For instance, a statement tank_of(ffh,fwd_rcs) says that the forward fuel helium tank belongs to the forward subsystem. There are twelve possible maneuvers to be performed by firing jets of the Shuttle, encoded by atoms of the form maneuver(M).
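For illustration, a small fragment of such a description, using the relations just introduced (the jet name jet1 is made up for the example; the remaining constants appear in the text above):

system(fwd_rcs).  system(left_rcs).  system(right_rcs).
link(ffh,ff,ffha).      % helium tank ffh feeds fuel tank ff through valve ffha
tank_of(ffh,fwd_rcs).   % ffh belongs to the forward subsystem
jet_of(jet1,fwd_rcs).   % a (hypothetical) jet of the forward subsystem
direction(jet1,up).
maneuver(plus_x).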
The initial state of the plumbing module is mainly characterized by the fluent in_state(V,S), specifying that valve V is in state S (open or closed), and a collection of faulty components described by atoms of the form has_leak(V), damaged(J) and stuck(V,S) (valve V is stuck in position S). The role of defaults is essential for a compact description of the initial state. For example, it is assumed that all helium tanks are pressurized in the initial state and that normally functioning valves are initially closed. The latter statement can be nicely expressed using the default:

holds(in_state(V,closed),0) :-
    not holds(in_state(V,open),0).
Here and in the rest of the discussion variable V denotes a valve.
Important fluents in the characterization of the current state of the RCS are pressurized_by(N,TK), stating that fluid under pressure is flowing from tank TK to node N (in short, “N is pressurized by TK”), and ready_to_fire(J), saying that jet J is pressurized by the correct type of propellants and thus ready to fire (jets need to be pressurized with both fuel and oxidizer).
The Shuttle is ready for a maneuver M when an appropriate set of jets is ready to fire. To increase the efficiency of reasoning, we partition such a set of jets based on the subsystem the jets belong to. Fluent maneuver_ready(M,R) says that the jets of subsystem R involved in maneuver M are ready. For example, the following rule determines when the left subsystem is ready for the maneuver called “+x”, which only requires one aft-pointing jet in the left subsystem.
holds(maneuver_ready(plus_x,left_rcs),T) :-
    jet_of(J,left_rcs),
    direction(J,aft),
    holds(ready_to_fire(J),T).
To further illustrate the issues involved in the construction of PM, let us consider the definition of the fluent pressurized_by(N,Tk). Helium tanks are treated as special nodes and presently assumed to be always pressurized. Hence, the definition for these tanks is trivial. For other nodes, the definition is recursive. It says that any non-tank node N1 is pressurized by a tank Tk if N1 is not leaking and is connected by an open valve to a node N2 which is pressurized by Tk.
holds(pressurized_by(N1,Tk),T) :-
    link(N2,N1,V),
    ¬holds(leaking(N1),T),
    holds(in_state(V,open),T),
    holds(pressurized_by(N2,Tk),T).
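As a sketch, the trivial base case for helium tanks could take the following form, where helium_tank/1 and the time-step domain time/1 are assumptions introduced only for this sketch:

holds(pressurized_by(TK,TK),T) :-
    helium_tank(TK),   % helium tanks are assumed to be always pressurized
    time(T).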
In the RCS, a node is considered leaking if propellant flow toward it is regulated
by a leaking, open valve, and there is a path from such valve to the node along
which all valves are open. This can be nicely formalized by combining recursive
definitions and defined fluents, i.e. fluents that are considered false (resp., true) by default. The next rules show the formalization of the defined fluent leaking(N):
holds(leaking(N1),T) :-
    link(N1,N2,V), has_leak(V),
    holds(in_state(V,open),T).

¬holds(leaking(N),T) :-
    not holds(leaking(N),T).
Notice that fluent leaking is non-inertial, and the key step in its representation is
the use of the Closed World Assumption, encoded above as a default. The recur-
sive step is obtained by:
holds(leaking(N1),T) :-
    link(N1,N2,V),
    holds(in_state(V,open),T),
    holds(leaking(N2),T).
The high level of abstraction of A-Prolog is confirmed by the relatively small
number of rules present in the knowledge modules of USA-Advisor. For example,
the Plumbing Module consists of approximately 40 rules.
As usual, default rules are used to represent the inertia axiom. All the modules share the same inertia axiom (a similar rule exists for ¬holds(L,T)):
holds(L,T+1) :-
    not non_inertial(L),
    holds(L,T),
    not ¬holds(L,T+1).
In the rule, variable L ranges over all fluent literals. Relation non_inertial(L) is defined for non-inertial fluents such as leaking, and prevents the inertia axiom from being applied to them.
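For instance, leaking could be declared non-inertial with a schema of the following kind, where node/1 is a domain predicate assumed for the sketch:

non_inertial(leaking(N)) :- node(N).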
The flow of fuel and oxidizer propellants from tanks to jets is controlled by opening and closing valves along the path connecting these nodes. The state of valves can be changed either by manipulating mechanical switches or by issuing computer commands. Switches and computer commands are connected to the valves they control by electrical circuits.
In some specific phases of operation of the Shuttle, such as launch and landing, the on-board general purpose computers, GPCs, are in charge of opening/closing valves and achieve this objective by sending computer commands. If the Shuttle is in orbit, or the computer system is malfunctioning, an astronaut can normally override these commands by manually flipping the switches that control the valves to be opened/closed.
The Switch Position Module, SPM, describes how the actions of flipping switches
and the faults present in the system affect the position of switches. The only type of
fault considered in the SPM is switches being stuck. Throughout the model of the
RCS, this type of mechanical malfunctioning is represented by relation stuck(D,S),
stating that device D (in the model, D ranges over switches and valves) is stuck
in state S. Similarly to the plumbing module, the state of devices is described by
the fluent in state(D,S) meaning that device D is in state S. A device is always in a
state S if it is stuck in state S. The input to the SPM contains information on stuck
switches and the occurrences of switch flippings. The output of the module con-
sists of the position of switches resulting from the execution of the specified actions.
The effect of the actions performed on normally functioning switches is defined
by the dynamic causal law below. The law says that flipping a working switch Sw
to state S causes it to move to that state.
holds(in_state(Sw,S),T+1) :-
    occurs(flip(Sw,S),T),
    not stuck(Sw,S').
Notice the use of default negation in the rule to express the Closed World As-
sumption on the information on stuck switches. A more common approach would
consist in replacing default negation by classical negation and in encoding sepa-
rately the Closed World Assumption on relation stuck. Our choice to use default
negation directly is motivated by performance considerations.
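For comparison, a sketch of that alternative encoding (not the one used in USA-Advisor) might look as follows; the auxiliary relations switch/1 and is_stuck/1 are introduced only for the sketch:

holds(in_state(Sw,S),T+1) :-
    occurs(flip(Sw,S),T),
    ¬is_stuck(Sw).                % requires explicit knowledge that Sw is not stuck
is_stuck(Sw) :- stuck(Sw,S).
¬is_stuck(Sw) :- switch(Sw), not is_stuck(Sw).   % Closed World Assumption on stuck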
The fact that a switch Sw is always in state S if it is stuck in S is formalized by the rule:

holds(in_state(Sw,S),0) :- stuck(Sw,S).
The Valve Control Module, VCM, describes how computer commands and changes in the position of switches affect the state of valves. Intuitively, if a switch Sw is in position “open” or “closed”, the valve(s) it controls are normally in that state. Computer commands issued when the appropriate switch is in the special position “gpc” cause the corresponding valve(s) to either open or close (depending on the type of computer command). There are, however, two types of possible failures: valves can be stuck in some position, and electrical circuits can malfunction in various ways.
A substantial simplification of the VCM module is achieved by dividing it into two parts, called the basic and extended VCM modules. At the basic level, it is assumed that all electrical circuits are working properly, and therefore they are not included in the representation. The extended level includes information about electrical circuits and is normally used when some of the circuits are malfunctioning. In that case, the position of switches and the occurrence of computer commands may produce results that cannot be predicted by the basic representation.
At this level, the VCM deals with a set of switches, computer commands and
valves, and connections among them. The input of the basic VCM consists of the
positions of switches, the faults of valves, and the collection of computer com-
mands issued. The module implements an lp-function that, given this input, re-
turns positions of valves at the current moment of time. This output is used as
input to the plumbing module. The class of faults of the system considered at this
level consists of valves being stuck in some position.
Connections between devices (i.e. switches and valves) are described by relation
controls(Sw,V,C) meaning that switch Sw controls the state of valve V through
circuit C (circuits are reified). The connection between computer commands and
valves is modeled by atoms of the form commands(CC,V,S) (“computer command
CC moves valve V to position S”) and commands(cc(CC1,CC2),V,S) (“computer
commands CC1 and CC2 used together move V to S”).
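For example, the connections of a valve could be recorded by facts such as the following (all constant names here are hypothetical):

controls(sw1,v1,c1).              % switch sw1 controls valve v1 through circuit c1
commands(cc1,v1,open).            % computer command cc1 moves v1 to position open
commands(cc(cc2,cc3),v1,closed).  % commands cc2 and cc3 together close v1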
An electrical malfunctioning of the circuitry controlling valve V is represented by a statement of the form bad_circuitry(V) (in the RCS, each valve is controlled by no more than one circuit).
The dynamic behavior of the basic VCM is described by a set of fluents and ac-
tions. Actions are represented as follows:
– action_of(flip(Sw,S),R) - flipping switch Sw to state S is an action of the R subsystem of the RCS.
– action_of(cc(CC1,CC2),R) - issuing a pair of computer commands CC1 and CC2 is an action of the R subsystem of the RCS.
– action_of(CC,R) - issuing computer command CC is an action of the R subsystem of the RCS.
The normal effect of switch positions on the valves they control is captured by a law stating that, if a switch Sw controlling valve V is in a regular switch position S1 (different from gpc), and V is neither stuck nor controlled by malfunctioning circuitry, then V is in state S1:

holds(in_state(V,S1),T) :-
    controls(Sw,V,C),
    holds(in_state(Sw,S1),T),
    state_of(S1,v_switch), neq(S1,gpc),
    not stuck(V,S2), not bad_circuitry(V).
It is assumed that a valve V is always in state S if it is stuck in S, as defined by the rule:

holds(in_state(V,S),0) :- stuck(V,S).
Impossibility conditions are described by constraints. The VCM description in-
cludes such a constraint to express that it is not possible to move a switch to a
state it is already in.
:- holds(in_state(Sw,S),T),
    state_of(S,v_switch),
    occurs(flip(Sw,S),T).
This constraint eliminates any models where an action flip tries to move a switch Sw, which is in state S, to the same state S. Constraints of this type play an important role in increasing efficiency of the module by reducing the search space for plans.
The output of the VCM is a description of the state of valves and switches at the
current moment of time.
The extended VCM encompasses the basic VCM and also includes information
about electrical circuits, power and control buses, and the wiring connections
among all the components of the system.
This module, too, defines an lp-function. It takes as input the same information as
the basic VCM, together with faults on power buses, control buses and electrical
circuits. The extended VCM returns positions of valves at the current moment of
time, exactly like the basic VCM.
Since (possibly malfunctioning) electrical circuits are part of the representation,
it is necessary to compute the signals present on all wiring connections, in order
to determine the positions of valves. The signals present on the circuit’s wires are
generated by the Circuit Theory Module (CTM), included in the extended VCM.
A large part of this module was developed independently to address a different collection of tasks [8]. The part of the CTM used by the USA-Advisor is described
in the next section. Figure 3 below shows the connection between the Extended
Valve Control Module and the Circuit Theory Module.
Figure 3 Connection between Extended VCM and Circuit Theory Module. The input to the Extended VCM determines the signals on the input wires of the circuits, which are passed to the Circuit Theory Module; the latter returns the signals to the valves, and the Extended VCM outputs the positions of the valves.
Rules describing the behavior of three-pin and four-pin valves are similar.
Power and output pins work in the same way for all types of valves. Of the two
valve output pins one is labeled “open”, and the other “closed”. When a valve
is in state “open”, an electrical connection is established between the power pin
and the “open” output pin, while the “closed” output pin is disconnected. Wires
connected to the output pins are represented by statements output(W,V), which
says that wire W is an output wire of valve V , and output of type(W,S), stating
that output wire W corresponds to state S. Values on output wires of both solenoid
and motor controlled valves are determined by rule
holds(value(W,1),T) :-
    of_type(V,valve),
    output(W,V), output_of_type(W,S),
    input(Wp,V), input_of_type(Wp,power_bus),
    holds(in_state(V,S),T),
    holds(value(Wp,1),T).
This rule expresses that if valve V is in state S at time T , then the value on the
output wire (corresponding to S) of V is 1 at T when V is powered.
Values on output wires of a valve V indicate the state of V , and are therefore
mutually exclusive under normal behavior. If an output wire has value 1 at time T ,
then the value on the other output wire is 0 at T . This behavior is defined by rule
holds(value(W2,0),T) :-
    of_type(V,valve),
    output(W1,V), output(W2,V), neq(W1,W2),
    holds(value(W1,1),T).
If a valve has no power (abnormal condition) then all its output wires have value
0, which is specified by rule
holds(value(W,0),T) :-
    of_type(V,valve), output(W,V),
    input(Wp,V), input_of_type(Wp,power_bus),
    holds(value(Wp,0),T).
The behaviors described for switches and valves are valid provided that no faults
are involved. If a switch is stuck in some position, flipping has no effect. If a valve
is stuck in some position, signals on the input pins are not effective. If a power or
control bus is faulty, its output is constantly 0. Stuck devices are represented by stuck(D,S) as in the basic valve control module. Faulty power buses and control buses are described by the statement bad_device(B).
Given the type of a valve V , values on input wires of V at time T , malfunctioning
conditions expressed by stuck(V,S), and the state of V at time T − 1, the program
determines the state of V and the values present on its output wires at moment T .
The electrical circuits of the RCS are composed of both analog and digital components. Circuits are named through statements of the form elec_circ(C). In the extended level of the VCM, a digital gate or component, G, can malfunction if its input/output wire W is stuck at a value X (0 or 1), defined by the statement stuck_at(W,G,X). If this is the case, the representation of the electrical circuit(s) these gates belong to is also included as part of the module. However, it is not necessary to add the representation of circuits that are working properly. To indicate that circuit C connected to a valve V is malfunctioning, we add the rule
bad_circuitry(V) :-
    bad_circuitry(C),
    controls(Sw,V,C).
The behavior of different components of electrical circuits is described within the
circuit theory module.
The Space Shuttle flight computer software is contained in its five general purpose computers (GPCs), which control the vehicle during specific phases of a flight. This software controls all RCS activity and is responsible for transmitting commands for valve configuration and jet firings. If a switch is placed in the GPC state, computer commands can be issued to open or close the affected valves.
Issuing a computer command is represented as an action that will affect a target
device D by setting D to a new state. At the extended level of the VCM, issuing
computer commands is expressed by a dynamic causal law that asserts value 1 on
the wire W that connects the computer to a component of an electrical circuit. The
rule defining this behavior is
holds(value(W,1),T+1) :-
commands(CC,V,S), output(W,CC),
occurs(CC,T).
Normally, i.e. in the absence of computer commands, a signal value 0 is assigned
to the wire that connects a component of an electrical circuit to the computer, as
follows
holds(value(W,0),T) :-
    commands(CC,V,S), output(W,CC),
    not holds(value(W,1),T).
Wires connected to the output pins of computer commands, as well as power buses
and control buses, are identified by output(W,E), where E is either a computer
command, a power bus or a control bus.
The extended VCM, without the Circuit Theory module, consists of 36 rules.
The Circuit Theory Module (CTM) is a general description of normal and faulty behavior of components of electrical circuits, with possible propagation delays and 3-valued logic. It can also be used as a stand-alone application for simulation, computation of the maximum delay of a circuit, detection of glitches, and other tasks.
A large portion of the CTM was independently developed as part of the A-Circuit project [8]. Because of the modularity of our design, it has been possible to directly include the CTM in the USA-Advisor system. Some additions were necessary to account for more complex components used in the RCS. More importantly, we extended the model to allow the representation of faulty components. The description of the CTM is beyond the scope of this paper, but it is important to stress the central role of recursion in the state constraints of the CTM. The interested reader can refer to [41] for an in-depth discussion.
Next, we analyze the planning module used in USA-Advisor. For simplicity of
presentation we start our discussion by describing the basic structure of the mod-
ule. Section 4.5 contains an elaboration of the basic module obtained by adding
control knowledge. Section 5 describes a further improvement based on an exten-
sion of A-Prolog.
The Basic Planning Module of the USA-Advisor establishes a simple search criterion used by the program to find a plan. The structure of the Basic Planning Module described in this section follows the generate and test approach from [21,36,42].
The main idea of this approach consists in establishing a one-to-one correspon-
dence between plans for achieving a goal G in at most a given number, lasttime,
of steps and answer sets of a logic program PG . This program normally consists
of (a) a large part describing our knowledge about the corresponding dynamic
system, and (b) a smaller part containing specification of a goal, a special rule
“generating” actions needed to achieve this goal, and possibly some other rules
describing properties of the desired plans. The following discussion illustrates this
idea. Notice that we differ from the standard answer set planning approach in that
we take advantage of the fact that the RCS consists of three, largely independent,
subsystems. A plan for the RCS is viewed as the composition of three separate
plans that can operate in parallel.
The following rules form the heart of the planner. The first rule, which is responsible for the generation of occurrences of actions, states that, for each time point, T, in a given finite interval, if the goal has not been achieved for subsystem R, then an action controlling subsystem R may occur at T.
0{occurs(A,T):action_of(A,R)}1 :-
    T < lasttime, subsystem(R),
    not goal(T,R).
Informally, not goal(T,R) means that the goal has not been achieved at step T for subsystem R.
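A sketch of how goal(T,R) could be defined, consistent with this reading (it uses the fluent maneuver_ready from the Plumbing Module and the relation selected_maneuver that appears in the goal rule below):

goal(T,R) :-
    subsystem(R),
    selected_maneuver(M),
    holds(maneuver_ready(M,R),T).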
The goal of preparing for such a maneuver is also split into subgoals, each prepar-
ing a particular subsystem. The first rule below states that the overall goal has been
achieved if every subsystem is ready for the current maneuver.
goal :-
    selected_maneuver(M),
    holds(maneuver_ready(M,left_rcs),T1),
    holds(maneuver_ready(M,right_rcs),T2),
    holds(maneuver_ready(M,fwd_rcs),T3).

:- not goal.
The second rule above is a constraint that states that the overall goal must be
achieved in every model.
Splitting the RCS into subsystems allowed us to substantially improve the effi-
ciency of the module because of the reduction in the length of plans. For instance,
in some cases, it allowed us to reduce the time to find a plan of 5 steps from a few
hours to a few seconds. Notice that, since there actually are some dependencies be-
tween some subsystems, a very small number of extremely rare (and undesirable)
plans can be missed. It is possible to extend the planning module in order to find
these plans. The interested reader may refer to [4] to see how this is accomplished.
Since the RCS contains more than 200 actions, with rather complex effects, and
may require long plans, the standard planning approach described above can still
be too slow, and needs to be substantially improved. This is done by the addition of various forms of heuristic, domain-dependent4, information. We refer to the Basic Planner expanded by such heuristics as the Smart Planner.
In this section we will discuss the expansion of the basic planner by useful heuris-
tic information, including control knowledge. The usefulness of control knowl-
edge for planning has been investigated in [1,34, 32, 3], but comparatively little is
known about the influence of heuristics in answer set planning (see however [13]).
Such knowledge can be classified into two categories: domain dependent and do-
main independent. Both types of heuristics work by either limiting the combina-
tions of actions that can occur or by declaring that certain situations are illegal.
In either case the heuristics help prune the search space, leading to increased effi-
ciency, and improving plan quality by eliminating unwanted plans.
Some of the control knowledge used in the USA-Advisor can easily be included
for planning in other domains. An example of such domain independent knowl-
edge is the statement “Do not repeat actions already performed.” Note that, while
this rule does not apply in all domains, in many an optimal plan will never in-
clude the same action twice. This rule can be easily encoded in A-Prolog as the
following constraint:
:- action_of(A,R), neq(T1,T2),
    occurs(A,T1), occurs(A,T2).
USA-Advisor also contains a number of domain-specific heuristics. The first example shown here states that a switch should not be moved to the gpc (general
purpose computer) position unless the following action is to issue a computer
command to the valve related to that switch.
:- controls(Sw,V),
    occurs(flip(Sw,gpc),T),
    not issued_commands(V,T+1).
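The relation issued_commands is defined elsewhere in the module; a sketch of a possible definition, in terms of the commands relation of the VCM, is:

issued_commands(V,T) :-
    commands(CC,V,S),
    occurs(CC,T).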
4 Notice that the addition does not affect the generality of the algorithm.
Note that while there are valid plans for the operation of the RCS which do not
obey this rule, for each of them there is a plan containing exactly the same actions
which does obey it. This allows us to further prune the search space.
More domain-dependent rules embody common-sense knowledge of the type “do
not pressurize nodes which are already pressurized.” In the RCS, some nodes can
be pressurized through more than one path. Clearly, performing an action in order
to pressurize a node already pressurized will not invalidate a plan, but this involves
an unnecessary action. Although we do not claim that the computed plans are optimal, the shortest sequence of actions achieving the goal is a good candidate for an optimal plan. The following constraint eliminates models where more than one
path to pressurize a node N2 is open.
:- link(N1,N2,V1), link(N1,N2,V2), neq(V1,V2),
    holds(in_state(V1,open),T),
    holds(in_state(V2,open),T),
    not stuck(V1,open), not stuck(V2,open).
The Planning Module contains approximately 20 rules of which 15 are heuristics.
Next, we discuss the lessons learned from the development of USA-Advisor de-
scribed in the previous sections. In Section 5, we explain how the use of an exten-
sion of A-Prolog later allowed us to substantially improve the quality of reasoning
carried out by the system.
4.6 Discussion
The Smart Planner is, to the best of our knowledge, the largest and most sophisticated answer set planner in existence. Below are some lessons we learned from its
design and implementation.
– The same formalization of the domain can be used for reasoning tasks other than planning. All that is needed is to replace the reasoning module, as shown later in Section 6 and in more detail in [5].
– The heuristics used in the Smart Planner were easy to encode and to use. Moreover, our experiments show that they significantly improve both the quality of plans and the efficiency of search.
– The planner’s ability to mix parallel and sequential plans5 and to efficiently search for them is a key ingredient in the success of the project.
Overall, answer set planning proved to be a good tool for our purpose. We are not
aware of any other tool which would allow us to deal with the complex effects of
actions of the RCS.
Experiments show that the system is also quite efficient and meets the criteria for
use by NASA stating that a plan should be found in at most 20 minutes. In fact,
in our experiments the threshold has been exceeded in only 2 cases out of 2000,
while the average time to find a plan has been about 11 seconds – far lower than
the 20 minute threshold. This is partly due to the non-numerical nature of the problem. The fact that, despite the large number of concurrent actions involved, the plans were comparatively short also contributed to the efficiency. To expand the applicability of answer set planning and reasoning to hybrid systems, i.e. systems involving “continuous” time and numerical computations, we need to substantially extend the existing answer set solvers.
Let us now look in more detail at how the experiments were performed and at the
results. To assess efficiency, we have randomly generated 2000 problem instances,
each specifying a set of faults and a maneuver to be performed. The instances are
partitioned in 10 sets of 200 elements, according to the number and type of faults
in them. Every set is denoted by a pair hmech, eleci, where mech and elect are
respectively the number of mechanical and electrical faults present in the instances
of the set. Table 1 shows the sets of instances used for our experiments (for further
details on the generation of the instances, the reader can refer to [41]).
Recall that our planner takes a parameter, lasttime, specifying the maximum al-
lowed length of the plans. In the experiments, we used an algorithm that, given a
problem instance, iteratively runs the planner, increasing lasttime by 1 if no plan
is found. When a plan is found the procedure terminates and the plan is returned.
The value of lasttime ranges between 3 and 10. We also included a timeout of
600 seconds for each call to the planner: if no plan is found within that time, the
planner is interrupted and a new iteration is performed. Notice that, if no timeout
occurs, this approach is guaranteed to find shortest plans, in terms of the corre-
sponding value of lasttime6 .
The average time (including all the calls to the planner that occur during the iterations over lasttime) to find a plan of up to 10 steps, or to determine that none exists, is shown in Table 2. The table also includes, for each set, the number of instances that have no solution in 10 steps or less.
5 As we discussed earlier, the plans found by our planner consist of sequences of compound
actions, each containing at most one action per subsystem. The elements of each compound
action are to be executed concurrently.
6 But not necessarily in terms of number of actions, as we will see later.
Table 1 Sets of instances used in the experiments.

Set        Mechanical faults   Electrical faults
ins-3-0    3                   0
ins-5-0    5                   0
ins-8-0    8                   0
ins-10-0   10                  0
ins-3-2    3                   2
ins-5-3    5                   3
ins-8-5    8                   5
ins-10-3   10                  3
ins-10-5   10                  5
ins-10-7   10                  7
As the numbers show, some sets of instances were quite hard.
Figures 4–8 give a graphical representation of the times to find a solution for each
instance. All the tests in this paper were performed on a Pentium 4 3.2GHz with
1.5GB RAM running NetBSD 3.99.7, lparse 1.0.13, and SMODELS 2.26.
Table 2 Average time to find a plan of up to 10 steps, and number of instances with no solution in 10 steps or less.

Set        Avg. time (secs)   Instances with no solution
ins-3-0    4.6443             7
ins-5-0    4.5658             28
ins-8-0    11.4887            63
ins-10-0   22.4169            96
ins-3-2    4.1187             60
ins-5-3    13.8207            102
ins-8-5    8.3934             138
ins-10-3   20.2484            177
ins-10-5   10.1764            162
ins-10-7   11.0357            143
Average    11.0909            976
Figures 4–8 Time (in seconds) to find a solution, plotted against instance number (1–200), for the various sets of instances.
5.1 CR-Prolog
To illustrate, consider a program Π1 consisting of two cr-rules, r1 and r2, that allow the reasoner to possibly believe p and q respectively, together with the preference statement r3 : prefer(r1, r2). Π1 has one answer set: {prefer(r1, r2)}. Notice that cr-rules are not applied, and hence the preference atom has no effect. Now consider the program Π2 = Π1 ∪ {r4 : ← not p, not q}. Now cr-rules must be used to restore consistency. Since r1 is preferred to r2, the answer set is: {p, prefer(r1, r2)}. Finally, consider Π3 = Π2 ∪ {r5 : ← p}. Its answer set is: {q, prefer(r1, r2)}.
For the definition of the semantics of CR-Prolog, refer to [5].
In several cases, “best” plans are selected based on some minimization criteria. An
interesting case is when we are given a set of requirements that plans should satisfy
if at all possible (e.g., “if at all possible, do not skip lunch”). Such requirements
are referred to as soft (or defeasible). In our approach, the satisfaction of soft
requirements is checked for in the test phase of the search.
In its simplest form, a soft requirement is encoded by a constraint and a cr-rule.
The body of the constraint contains:
– the encoding of the condition that plans should satisfy, according to the soft
requirement; the encoding is such that, if the requirement is not met, the body
of the constraint is satisfied;
– a condition (the inhibitor) that makes it possible to stop the application of the constraint, in case the soft requirement has to be violated.
For example, a possible constraint for the soft requirement “if at all possible, do not skip lunch” is:

← occurs(skip(lunch),T), not allowed(skip(lunch)).
The cr-rule is used to say that, under some conditions, the constraint may possibly
be inhibited, but its inhibition should be a rare occurrence. The cr-rule for the soft
requirement above is:
allowed(skip(lunch)) ←+ .
which intuitively says that one may be possibly allowed to skip lunch.
If plans exist that do not violate the requirement, the cr-rule is not used. However,
if no such plan exists, the cr-rule is used to conclude that skipping lunch is allowed.
This inhibits the constraint, and allows the computation of plans violating the
requirement.
For another example, consider the encoding of the soft requirement “if possible
do not skip lunch; however, if you had a big breakfast, you are allowed to skip
lunch,” which can be encoded with rules along the following lines:

← occurs(skip(lunch),T), not allowed(skip(lunch)).
allowed(skip(lunch)) ←+ had(big_breakfast).
The cr-rule informally says that, if one had a big breakfast, he may possibly be
allowed to skip lunch.
When several soft requirements are specified, one is often interested in ranking
them in order of preference, so that the most preferred soft requirements are the
ones that are less likely to be violated. Preferences statements of CR-Prolog pro-
vide a convenient way to encode such preferences. For example, consider the two
soft requirements “if at all possible, do not skip lunch” and “if at all possible, do not skip dinner,” together with the preference “skipping lunch is preferred over skipping dinner.” The soft requirements can be encoded as before, by constraints together with two cr-rules (call them r_l and r_d) that allow skipping lunch and dinner respectively; the preference is then expressed by the statement

prefer(r_l, r_d),
which says that (if one has to skip either dinner or lunch) skipping dinner should be considered only if skipping lunch is not possible. It is important to stress that preference statements of CR-Prolog allow encoding criteria more complex than the one above, e.g. dynamic preferences such as “if you had a big breakfast, it is better for you to skip lunch than to skip dinner; otherwise, skipping dinner is preferred.” Such a preference can be encoded in CR-Prolog with rules along the following lines:

prefer(r_l, r_d) ← had(big_breakfast).
prefer(r_d, r_l) ← not had(big_breakfast).
The structure of the A-Prolog based planners, such as the one shown in Section 4.4, can be easily extended to take defeasible requirements into account. Let PLANPROB be a set of A-Prolog rules containing the encoding of a domain description as well as the specification of the goal and the planning module. Let also SOFT_REQ be the encoding of a set of defeasible requirements. The plans that best satisfy the requirements can be found by computing the answer sets of the program

PLANPROB ∪ SOFT_REQ.

One of the soft requirements used in USA-Advisor states that the crossfeed between the Left and Right subsystems should not be used unless strictly necessary; it is encoded by a cr-rule and a constraint.
The cr-rule says that the use of the crossfeed may possibly be allowed at any time
step T . The constraint says that it is impossible for action A of subsystem R to
occur at T if A opens a crossfeed valve, and the use of the crossfeed is not allowed
in R at time step T .
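A sketch of such an encoding, in the style of the rules above, is given below; the relation opens_xfeed(A), identifying actions that open crossfeed valves, and the names r_xf and xfeed are assumptions made only for this sketch.

r_xf(R,T) : allowed(xfeed(R,T)) ←+ subsystem(R).

← occurs(A,T), action_of(A,R),
   opens_xfeed(A),
   not allowed(xfeed(R,T)).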
To see how the introduction of this requirement affects planning, consider a situa-
tion in which the RCS is functioning correctly and we need to perform a maneuver
that involves the use of the left and right subsystems.
Because of the absence of faults, the design of the RCS guarantees that the ma-
neuver can be performed without the use of the crossfeed. On the other hand, the
design also guarantees that the crossfeed can be used to achieve the goal. Hence,
the set of plans found by the planner from Sections 4.4 and 4.5 contains both plans
that use the crossfeed and plans that do not use it.
If the soft requirement described above is used, then the planner will return only
plans that do not use the crossfeed, as these are the “best” plans according to the
requirement. It is worth stressing the non-monotonic behavior of the planner: if
faults are later added to the description of the initial situation, so that the goal can
only be achieved with the use of the crossfeed, then the planner will be forced to
violate the soft requirement and to return plans that involve the crossfeed.
Another example of the use of soft requirements for USA-Advisor is the encod-
ing of the policy that “computer commands should be avoided if at all possible.”
(This policy is motivated by the fact that, normally, issuing a computer command
requires preparing and uploading a patch of the software of the on-board com-
puter.) The CR-Prolog encoding of the requirements is:
r_ccs(R,T) : allowed(ccs(R,T)) ←+ subsystem(R).
The cr-rule says that, at any step T of the plan for subsystem R, the agent may possibly be allowed to issue computer commands. The constraint says that it is impossible for a computer-command action A of subsystem R to occur at step T if the agent is not allowed to issue computer commands on subsystem R at step T.
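A sketch of the corresponding constraint, where computer_command(A) is a hypothetical relation identifying computer-command actions:

← occurs(A,T), action_of(A,R),
   computer_command(A),
   not allowed(ccs(R,T)).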
Experimental results confirm that the plans returned by CR-Plan are of a signif-
icantly higher quality than the plans generated by the basic planner described in
Sections 4.4 and 4.5.
We have applied CR-Plan to the problem instances from Section 4.6. The iteration
over the maximum plan length has been performed using the algorithm described
there. For these experiments, we have used version 1.5 of CRMODELS8, a recently developed inference engine for CR-Prolog [35].
The experiments have been performed in two sessions. In the first session, we
have removed from CR-Plan all the preference statements (see (6)–(7) above). The
resulting planner, called CR-Plan− , was tested on the 2000 problem instances. In
the second session, we added to CR-Plan− the preference statements (7) and tested
the resulting planner, called CR-Plan+ , on the same 2000 instances.
The use of CR-Plan− substantially increased the quality of plans with respect
to the A-Prolog based planner. Overall, computer commands and crossfeed were
used 119 times, as opposed to 3089 times by the A-Prolog planner, with an im-
provement of 96.15%. Moreover, in 569 cases, CR-Plan− returned plans that contained fewer actions than the plans found by the A-Prolog planner (on no occasion were they longer). The total number of irrelevant actions avoided by CR-Plan−
was 1595, corresponding to a reduction of 19.69% on the total number of actions
used (8102 for the A-Prolog planner and 6507 for CR-Plan− ).
Although the experiments were mainly aimed at assessing the quality improve-
ment, we found that the speed of CR-Plan− was still largely acceptable, in spite of
the substantial increase in the complexity of the task performed. Out of 2000 runs,
CR-Plan− exceeded NASA’s 20 minute threshold only 43 times, corresponding to
2.15% of the instances. The average time to complete one instance was 238 sec-
onds, far below the threshold. If we discard the 43 outliers, the average time goes
down to 156 seconds. The times for CR-Plan− are shown in Table 3.
8 Available from http://www.krlab.cs.ttu.edu/Software.
Table 3 Average times for CR-Plan− and the A-Prolog planner, grouped by set of instances.
Table 4 Average times for CR-Plan+ and comparison with the other planners
It is interesting to note that in some cases the average time for CR-Plan+ is significantly smaller than that of CR-Plan−. A possible explanation of this phenomenon is that the introduction of preferences adds several constraints on the application of cr-rules, and in some experiments this can help to reduce the search space.
5.4 Discussion
In the previous sections we have shown how our A-Prolog based methodology can
be used to model complex domains and to perform planning tasks.
An important feature of our methodology is that the domain model is independent
of the particular type of reasoning and can thus be shared by all the reasoning
modules.
To demonstrate this point, in this section we describe a simple diagnostic mod-
ule for the RCS that uses the same domain model as the planning module. The
interested reader may refer to [6] for an in-depth description of answer set based
diagnosis.
We view the diagnostic task as a reasoning process in which the agent explains
unexpected observations by making hypotheses on faults that may be present in
the system. In our approach, observations about fluents are encoded by statements
of the form obs(l,t), where l is a fluent literal and t is a time step. The statement
informally says that l was observed to hold at step t. Possible observations on
the state of the RCS are, for example, obs(pressurized_by(ff12j,ffh),3) and obs(in_state(ffm1,open),5). Notice the difference between relation obs, which
encodes observations, and relation holds, which encodes the reasoner’s beliefs or
expectations.
Observations and expectations are linked in A-Prolog by the following set of ax-
ioms, RA:
holds(L, 0) ← obs(L, 0).

← obs(L, T), not holds(L, T).
The first axiom provides a simple way to describe the initial situation (and can be
easily made more sophisticated, e.g. by introducing the Closed World Assump-
tion). The second axiom, called reality check, ensures that the reasoner’s expecta-
tions coincide with the observations.
Given a domain model M, a set H of statements of the form occurs(A, T ), spec-
ifying the actions performed, and a collection of observations O, the need for a
diagnosis can be verified by checking the consistency of
S = M ∪ RA ∪ H ∪ O.
If S is consistent, the reasoner can conclude that O contains no unexpected ob-
servations. Otherwise, S is called a symptom, and a diagnosis needs to be found.
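One simple way to search for candidate diagnoses, in the spirit of the generate-and-test approach used for planning, is sketched below; this is only an illustration, not the diagnostic module of [5,6], and possible_fault/1 is a hypothetical relation enumerating the faults under consideration.

% hypothesize an arbitrary subset of the candidate faults
{hypothesized(F) : possible_fault(F)}.
% a hypothesized fault is added to the domain model, e.g. for stuck valves
stuck(V,S) :- hypothesized(stuck(V,S)).

The reality check in RA then eliminates the sets of hypothesized faults that fail to explain the observations; the remaining answer sets correspond to candidate diagnoses.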
7 Lessons Learned
Our methodology for representing knowledge about dynamic domains and for
designing reasoning modules proved to be scalable beyond small domains. The
key steps of the methodology are:
1. Identifying the relevant objects and relations in the domain.
2. Identifying the actions.
3. Describing the effects of the actions using the action language based approach.
The decisions made at step (1) heavily influence both the clarity of the model
and the efficiency of the resulting system. An example of this is the modeling of the junctions of the RCS, which improved the model substantially.
The availability of state constraints also proved to be important for modeling
domains of size and complexity comparable to the RCS. We believe state con-
straints contributed substantially to the compact definition of fluents such as
pressurized_by(N,TK) and of our general theory of electrical circuits.
Besides its already known applications, we found default negation useful in selecting modules of the domain’s encoding. Consider for instance the way relation bad_circuitry(V) influences the selection of the Basic and Extended Valve Control Modules: in the model, we only had to specify when the relation holds (thus enabling the Extended Valve Control Module), while default negation was used to determine when it does not hold. Without default negation, we would have been forced to state explicitly when bad_circuitry(V) does not hold.
Control knowledge proved to be essential in improving the speed of reasoning.
Very frequently, this type of information could be found in the operating proce-
dures of the RCS.
In order to improve the speed of computation, it is also profitable to divide the actions into independent subsets (elements of which are executed concurrently) whenever possible.
The use of an external front-end is important to allow the automatic selection of the appropriate modules and to avoid problems due to the large size of the grounding of the whole model.
8 Conclusions
Acknowledgements The authors would like to thank United Space Alliance for their continued
support. This work was partially supported by United Space Alliance under contract number
NAS9-20000 and by NASA under contract number NASA-NNG05GP48G.
References
1. F. Bacchus and F. Kabanza. Using Temporal Logic to Control Search in a Forward Chaining
Planner, pages 141–153.
9 USA-Advisor can be downloaded from http://www.krlab.cs.ttu.edu/Software.
2. F. Bacchus and F. Kabanza. Planning for Temporally Extended Goals. Annals of Mathe-
matics and Artificial Intelligence, 22(1-2):5–27, 1998.
3. F. Bacchus and F. Kabanza. Using Temporal Logics to Express Search Control Knowledge
for Planning. Artificial Intelligence, 116:123–191, 2000.
4. Marcello Balduccini. USA-Smart: Improving the Quality of Plans in Answer Set Planning.
In PADL’04, Lecture Notes in Artificial Intelligence (LNCS), Jun 2004.
5. Marcello Balduccini. Answer Set Based Design of Highly Autonomous, Rational Agents.
PhD thesis, Texas Tech University, Dec 2005.
6. Marcello Balduccini and Michael Gelfond. Diagnostic reasoning with A-Prolog. Journal
of Theory and Practice of Logic Programming (TPLP), 3(4–5):425–461, Jul 2003.
7. Marcello Balduccini and Michael Gelfond. Logic Programs with Consistency-Restoring
Rules. In Patrick Doherty, John McCarthy, and Mary-Anne Williams, editors, International
Symposium on Logical Formalization of Commonsense Reasoning, AAAI 2003 Spring
Symposium Series, pages 9–18, Mar 2003.
8. Marcello Balduccini, Michael Gelfond, and Monica Nogueira. A-Prolog as a tool for declar-
ative programming. In Proceedings of the 12th International Conference on Software En-
gineering and Knowledge Engineering (SEKE’2000), pages 63–72, 2000.
9. Marcello Balduccini and Veena S. Mellarkod. A-Prolog with CR-Rules and Ordered Dis-
junction. In ICISIP’04, pages 1–6, Jan 2004.
10. Chitta Baral. Knowledge Representation, Reasoning, and Declarative Problem Solving.
Cambridge University Press, Jan 2003.
11. Chitta Baral and Michael Gelfond. Logic Programming and Knowledge Representation.
Journal of Logic Programming, 19–20:73–148, 1994.
12. Chitta Baral and Michael Gelfond. Reasoning Agents In Dynamic Domains. In Workshop
on Logic-Based Artificial Intelligence, pages 257–279. Kluwer Academic Publishers, Jun
2000.
13. Chitta Baral and Le-Chi Tuan. Effect of knowledge representation on model based planning:
experiments using logic programming encodings. In Proceedings of 2001 AAAI Spring
Symposium on Answer Set Programming, pages 110–115, 2001.
14. Matthew Barry and Richard Watson. Reasoning about actions for spacecraft redundancy
management. In Proceedings of the 1999 IEEE Aerospace Conference, volume 5, pages
101–112, 1999.
15. Gerhard Brewka. Logic programming with ordered disjunction. In Proceedings of AAAI-02,
2002.
16. Gerhard Brewka, Ilkka Niemela, and Tommi Syrjanen. Implementing Ordered Disjunc-
tion Using Answer Set Solvers for Normal Programs. In Sergio Flesca and Giovanbattista
Ianni, editors, Proceedings of the 8th European Conference on Logics in Artificial Intelligence (JELIA
2002), Sep 2002.
17. Gerhard Brewka, Ilkka Niemela, and Tommi Syrjanen. Logic Programs with Ordered Disjunction. Computational Intelligence, 20(2):335–357, 2004.
18. Francesco Buccafurri, Nicola Leone, and Pasquale Rullo. Adding Weak Constraints to Dis-
junctive Datalog. In Proceedings of the 1997 Joint Conference on Declarative Programming
APPIA-GULP-PRODE’97, 1997.
19. Francesco Calimeri, Tina Dell’Armi, Thomas Eiter, Wolfgang Faber, Georg Gottlob, Gio-
vanbattista Ianni, Giuseppe Ielpa, Christoph Koch, Nicola Leone, Simona Perri, Gerard
Pfeifer, and Axel Polleres. The DLV System. In Sergio Flesca and Giovanbattista Ianni, ed-
itors, Proceedings of the 8th European Conference on Logics in Artificial Intelligence (JELIA 2002),
Sep 2002.
20. Tina Dell’Armi, Wolfgang Faber, Giuseppe Ielpa, Nicola Leone, and Gerard Pfeifer. Ag-
gregate Functions in Disjunctive Logic Programming: Semantics, Complexity, and Imple-
mentation in DLV. In Proceedings of the 18th International Joint Conference on Artificial
Intelligence (IJCAI 03). Morgan Kaufmann, Aug 2003.
21. Yannis Dimopoulos, J. Koehler, and B. Nebel. Encoding planning problems in nonmono-
tonic logic programs. In Proceedings of the 4th European Conference on Planning, volume
1348 of Lecture Notes in Artificial Intelligence (LNCS), pages 169–181, 1997.
22. Thomas Eiter, Wolfgang Faber, Nicola Leone, Gerard Pfeifer, and Axel Polleres. Answer
Set Planning under Action Costs. Journal of Artificial Intelligence Research, 19:25–71,
2003.
23. J. J. Finger. Exploiting Constraints in Design Synthesis. PhD thesis, Stanford University,
1987.
24. Alfredo Gabaldon and Michael Gelfond. From Functional Specifications to Logic Pro-
grams. In Proceedings of the International Logic Programming Symposium (ILPS’97),
1997.
25. Michael Gelfond. Representing Knowledge in A-Prolog. In Antonis C. Kakas and Fariba
Sadri, editors, Computational Logic: Logic Programming and Beyond, Essays in Honour of
Robert A. Kowalski, Part II, volume 2408, pages 413–451. Springer Verlag, Berlin, 2002.
26. Michael Gelfond and Nicola Leone. Knowledge Representation and Logic Programming.
Artificial Intelligence, 138(1–2), 2002.
27. Michael Gelfond and Vladimir Lifschitz. The stable model semantics for logic program-
ming. In Proceedings of ICLP-88, pages 1070–1080, 1988.
28. Michael Gelfond and Vladimir Lifschitz. Classical negation in logic programs and disjunc-
tive databases. New Generation Computing, pages 365–385, 1991.
29. Michael Gelfond and Vladimir Lifschitz. Action Languages. Electronic Transactions on
AI, 3(16), 1998.
30. Michael Gelfond and Richard Watson. On methodology for representing knowledge in
dynamic domains. In Proc of the 1998 ARO/ONR/NSF/DARPA Monterey Workshop on
Engineering Automation for Computer Based Systems, pages 57–66, 1999.
31. Patrick J. Hayes and John McCarthy. Some Philosophical Problems from the Standpoint of
Artificial Intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence 4, pages
463–502. Edinburgh University Press, 1969.
32. Y. Huang, H. Kautz, and B. Selman. Control Knowledge in Planning: Benefits and Trade-
offs. In Proceedings of the 16th National Conference of Artificial Intelligence (AAAI’99),
pages 511–517, 1999.
33. M. Kaminski. A note on the stable model semantics of logic programs. Artificial Intelli-
gence, 96(2):467–479, 1997.
34. H. Kautz and B. Selman. The Role of Domain-Specific Knowledge in the Planning as
Satisfiability Framework. In Proceedings of AIPS’98, 1998.
35. Loveleen Kolvekal. Developing an Inference Engine for CR-Prolog with Preferences. Mas-
ter’s thesis, Texas Tech University, Dec 2004.
36. Vladimir Lifschitz. Action Languages, Answer Sets, and Planning, pages 357–373. The
Logic Programming Paradigm: a 25-Year Perspective. Springer Verlag, Berlin, 1999.
37. John McCarthy. Epistemological problems of artificial intelligence. In Proceedings of
IJCAI-77, pages 1038–1044, 1977.
38. Veena S. Mellarkod. Optimizing the Computation of Stable Models using Merged Rules.
Master’s thesis, Texas Tech University, May 2002.
39. Ilkka Niemela and Patrik Simons. Extending the Smodels System with Cardinality and
Weight Constraints, pages 491–521. Logic-Based Artificial Intelligence. Kluwer Academic
Publishers, 2000.
40. Ilkka Niemela, Patrik Simons, and Timo Soininen. Extending and implementing the stable
model semantics. Artificial Intelligence, 138(1–2):181–234, Jun 2002.
41. Monica Nogueira. Building Knowledge Systems in A-Prolog. PhD thesis, University of
Texas at El Paso, May 2003.
42. Monica Nogueira, Marcello Balduccini, Michael Gelfond, Richard Watson, and Matthew
Barry. An A-Prolog decision support system for the Space Shuttle. In PADL 2001, pages
169–183, 2001.
43. E. Pednault. ADL: exploring the middle ground between STRIPS and the situation cal-
culus. In Proceedings of the First International Conference on Principles of Knowledge
Representation and Reasoning (KR-89), pages 324–332, 1989.
44. Raymond Reiter. On Closed World Data Bases, pages 119–140. Logic and Data Bases.
Plenum Press, 1978.
45. Patrik Simons. Extending the Stable Model Semantics with More Expressive Rules. In Pro-
ceedings of the 5th International Conference on Logic Programming and Non-monotonic
Reasoning (LPNMR-99), number 1730 in Lecture Notes in Artificial Intelligence (LNCS).
Springer Verlag, Berlin, 1999.
46. Richard Watson. An application of action theory to the Space Shuttle. In PADL-99, volume
1551 of Lecture Notes in Artificial Intelligence (LNCS), pages 290–304, 1999.