
Towards Conflict Detection and Resolution of Safety Policies

M. Hall-May, T. P. Kelly; University of York; York, England

Keywords: systems of systems, safety, policy, conflict

Abstract

Safety policy sets out the rules that govern safe system interaction within a system of systems (SoS). These rules are
derived from high-level policy goals informed by a hazard analysis. They are expressed in terms of behaviour that
the system is allowed to exhibit and that which it is required to exhibit. However, this process can lead to rules that conflict with one another, in both obvious and subtle ways. The first challenge this presents is detecting a conflict, or at least the potential for one, before it arises. Conflicts can be identified in part by considering the context in which the system operates. The second challenge is how then to resolve such conflicts.
Several types of conflict exist, such as modality conflicts, authority conflicts and conflicts of resources. It is
necessary to classify and understand these conflict types before safety policy conflict detection and resolution can
become a systematic process.

Introduction

In large, heterogeneous, distributed societies of autonomous agents, such as a system of systems (ref. 1), individuals
operate according to a given set of rules – or policy. Policy guides the action of autonomous individuals or groups
according to some criteria. A statement of policy has been defined as “a rule that defines a choice in behaviour of a system” (ref. 2). By what criteria this choice is judged depends on the intent of the policy. Policies are typically
geared towards one of several objectives. For example, a safety policy describes how to protect the physical
integrity of a system; a security policy describes how to protect data integrity within a system, while a usage policy
describes the rights and privileges of users of a system. This paper has safety policies as its main focus (ref. 3).

Structure of the Paper

The following section discusses the role of policy decomposition and the difficulty of generating a consistent set of policy rules from competing, and non-competing, goals. Policy conflicts are then categorised along two dimensions, syntactic versus semantic and static versus dynamic, and these terms are explained. There follows a study of various types of policy conflict, illustrated by examples drawn from the military and aerospace domains. Finally, the paper discusses the issues of policy conflict detection and resolution.

Policy Decomposition

Policy goals motivated by safety concerns must be decomposed in order to be implemented by agents (ref. 4). The
policy decomposition process is informed by a hazard analysis (ref. 5) as well as by models of the system from
several viewpoints. An agent viewpoint reveals the capabilities and inter-relationships between the agents; a domain
viewpoint describes the ontology of the system, i.e. the agents’ knowledge of the environment and of each other; a
causal viewpoint describes how various factors and variables influence each other within the SoS. The exact nature
of these models is beyond the scope of this paper, but is described in greater detail in reference 6. An agent
viewpoint is necessary in order to consider the types of communications that will occur between agents as well as
which agent relies on the services or knowledge of another. The domain viewpoint can help in understanding how
the misinterpretation of common real-world artefacts in local mental models can occur. A causal viewpoint is
particularly important to a safety policy decomposition because it prompts the policy-maker to consider the factors
that influence the observations used by an agent to decide its actions.

Decomposing policy goals forms a hierarchy of policies at different levels of abstraction. At their lowest level of
abstraction policy statements are defined in terms of rules that permit, forbid or oblige an agent to carry out an
action. In this way both the permissions and requirements on an agent are expressed. Clearly there is the potential
for these statements to specify contradictory rules on an agent. Deriving these policy statements such that they are
completely consistent is a significant challenge. Different high-level objectives, whilst not completely incompatible,
can lead to conflicts between low-level rules being derived.
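
To make the later discussion of conflicts concrete, the following sketch (in Python; all names are illustrative assumptions, not a published policy language) shows one plausible representation of lowest-level policy statements as modality-tagged rules over a subject, an action and a target:

    from dataclasses import dataclass
    from enum import Enum

    class Modality(Enum):
        PERMIT = "permit"   # the agent is allowed to perform the action
        FORBID = "forbid"   # the agent must not perform the action
        OBLIGE = "oblige"   # the agent is required to perform the action

    @dataclass(frozen=True)
    class PolicyRule:
        subject: str        # the agent the rule governs, e.g. "agent_a"
        modality: Modality
        action: str         # e.g. "enter"
        target: str         # e.g. "zone_z"

    # Two low-level rules derived from different high-level goals; their
    # conflict is picked up by the syntactic check shown later.
    rules = [
        PolicyRule("agent_a", Modality.PERMIT, "enter", "zone_z"),
        PolicyRule("agent_a", Modality.FORBID, "enter", "zone_z"),
    ]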

Policy Conflict

Policies are set by different stakeholders, belonging to different organisations with different interests and
requirements, and are set at different times. A single system will typically find itself interacting with, and belonging
to, various groups of systems and hence will be subject to many disparate policies. One of the problems of
composing policies in this manner is the conflict that can arise from contradictory decisions produced by them. It
helps, however, to analyse the types of conflict that can arise in order better to understand them and hopefully to
formulate resolution strategies to avoid them.

Categories of Conflicts

Jajodia et al (ref. 7) identify conflicts that are either static or dynamic. Similarly, Dourish (ref. 8) classifies conflicts
as being either syntactic or semantic. In reality, policy conflicts can be described as a combination of both of these.

            Static                                         Dynamic

Syntactic   Agent a is permitted to enter zone z.          Agent a is permitted to enter no-fly zone.
            Agent a is forbidden to enter zone z.          Agent a is forbidden to enter any zone that
                                                           may harbour undetected hostiles.

Semantic    Aircraft is permitted to enter no-fly zone.    Agent a is permitted to fire upon identified
            Agents capable of flight are forbidden to      targets.
            enter no-fly zone.                             Agent a is permitted to nominate targets.

Table 1 – Categories of Policy Conflict Types

Syntactic conflicts are those that can be identified through static analysis of the policy statements and are defined
purely in terms of the syntax of the policy. These types of conflicts occur irrespective of the state of the system
enforcing the policy and may be the result of specification errors in the policy, but may also be legitimately derived
from high-level goals, i.e. a case could be made for the presence of both rules even though they conflict.

In contrast, semantic conflicts are those that are defined by the particular application of the system. Semantic
conflicts are therefore ‘application-specific’, in that they rely on the particular application domain to provide the
semantics that specify whether two policies are in conflict or not. Detection of these kinds of conflict is much more
difficult since it is necessary to analyse the system in all possible states. This is especially difficult given the
dynamic nature of systems of systems, in that they can be dynamically reconfigured to use the same set of assets in a
different application. For the same reason, semantic conflicts are a particularly important class of conflict. Indeed,
Wies notes that the “semantical interdependencies of policies will probably cause the majority of conflicts” (ref. 9).

Both syntactic and semantic conflicts can exist statically or occur dynamically. Static conflicts can be detected ‘off-line’, i.e. without evaluating the policy in action. Dynamic conflicts, on the other hand, occur at the ‘run-time’ of
the system, i.e. at the moment when the policy is being evaluated and enforced, and arise because a particular state
of the system results in a conflict between two or more policy statements. That is, if conditions or variables in policy
statements evaluate to certain values whilst the policy is on-line, this can cause a conflict between policies.

Table 1 shows four examples of policy conflicts in a matrix contrasting the various categories of conflict. A static
syntactic conflict is the simplest and most obvious of the four categories. The example given here details two
policies, one of which allows a specific agent, a, to enter a zone, z, and another that forbids precisely the same
action. Since the subject of both policy statements, agent a, and the target, zone z, are specified and are the same, it
is obvious that the policies are in conflict. However, in the dynamic syntactic conflict example the particular forbidden zone is not named, but is instead specified by the noun phrase “any zone that may harbour undetected hostiles”. In this case the target of the policy statement is evaluated dynamically at the moment it is enforced and, if it is determined that the no-fly zone potentially harbours undetected hostiles, then the agent is both permitted and forbidden to enter that zone. The conflict is then equivalent to the static syntactic example. The
conflict has arisen dynamically, but is defined entirely by the syntax of the policy statements, not by the semantics
of the system.

In Table 1, there is also an example of a static semantic policy conflict. This means that the conflict exists without
putting the policy into operation, but is only detectable if we have some knowledge of the semantics of the system.
In this case, an agent that is an aircraft is permitted to enter the no-fly zone, but there also exists another policy
stating that agents capable of flight are forbidden from entering the no-fly zone. Taking ‘aircraft’ and ‘no-fly zone’ as simple identifiers, they carry no extra meaning, and until we know that an aircraft is capable of flight the two policies are completely consistent. However, with some appreciation of the meaning of ‘aircraft’, in terms of its capabilities and relationships with other agents and resources, it is possible to detect the conflict between the policies. Similarly, a semantic conflict can occur dynamically, as in the example in Table 1. In this example, an agent is permitted to fire on identified targets and also allowed to nominate targets. Again, eliding the semantics of the actions ‘fire’ and ‘nominate’, there is no syntactic conflict between the two policies. However,
consideration of the meaning of these actions reveals the conflict between allowing a single agent to both nominate
and open fire on a target unilaterally.
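
Continuing the illustrative PolicyRule sketch introduced earlier, a purely syntactic check for the static conflict in the top-left cell of Table 1 might compare rule fields as opaque strings, with no appeal to their meaning:

    def syntactic_conflicts(rules):
        """Pairs of rules over the same subject, action and target whose
        modalities clash. Purely syntactic: identifiers are compared as
        opaque strings, with no appeal to their meaning."""
        clashing = {frozenset({Modality.PERMIT, Modality.FORBID}),
                    frozenset({Modality.OBLIGE, Modality.FORBID})}
        found = []
        for i, r1 in enumerate(rules):
            for r2 in rules[i + 1:]:
                if (r1.subject == r2.subject and r1.action == r2.action
                        and r1.target == r2.target
                        and frozenset({r1.modality, r2.modality}) in clashing):
                    found.append((r1, r2))
        return found

    # Reports the permit/forbid clash on ("agent_a", "enter", "zone_z")
    # defined earlier; the semantic conflicts of Table 1 are invisible to
    # it, because "aircraft" and "no-fly zone" are just opaque strings here.
    assert len(syntactic_conflicts(rules)) == 1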

Policy Conflict Types

It is possible to identify a number of distinct types of policy conflict. This section examines each in turn, illustrated by examples drawn from the military and aerospace domains.

Positive/negative conflict: The simplest conflict is when an agent is both permitted and forbidden to perform an
action, or simultaneously required and not required to perform it. Such a conflict can be specified intentionally, result from a specification error, or arise once the policy conditions have been evaluated at run-time. It might be difficult to imagine how such directly conflicting policies come to be specified, but given different objectives two contradictory rules may legitimately be derived.

Example: An example of this type of conflict would be if an agent is both forbidden and permitted to enter a named zone a. However, if the zone were not specified, but was evaluated at run-time (e.g. the agent is forbidden from entering an area that has not been cleared of enemies in the past 24 hours), then this could conflict with an existing authorisation to enter that zone. Similarly, a policy allowing the agent to enter zone a could conflict with one forbidding it to enter zone b, if b overlapped partially or totally with a.

Permission/obligation conflict: As with the direct positive/negative modality conflict, an agent can also be simultaneously obliged to perform, and forbidden from performing, a given action. Again, this can arise as a result of a specification error or as an unanticipated combination of conditions at run-time. In a restrictive SoS (i.e. one where everything that is not explicitly allowed is forbidden) it is more likely to be a simple oversight in specifying the permissions to match an agent’s obligations. An understanding of what is involved in carrying out the obligations is necessary when specifying the correct and corresponding permissions. The decomposition of policy only goes so far and does not reach the level of implementation, since this can vary from one system to another and also over time as technologies change. The subsection on deductive refinement conflicts below deals with this problem in more detail.

Example: Consider a policy obliging an agent to report its position every hour, and another policy forbidding it from using communications under certain circumstances. The second policy does not explicitly prohibit the agent from reporting its position, but nevertheless restricts its options for doing so.

Deductive refinement conflict: While this is not a type of conflict in its own right, it is important to realise that
policies may not always obviously conflict with one another at the same level of specification. Often it is necessary
to logically follow through the implications of a certain policy in terms of the agent behaviour that the underlying
system will exhibit. It is the actions that are invoked in carrying out the policy that may be in conflict with other
policies. These low-level actions are not always part of the policy specification hierarchy, because implementation
of the policy actions may be system-specific or subject to change over time. Using an action hierarchy it is possible
to capture the sub-processes that are involved in various actions. Hence, a policy on an action must be analysed to
see if any of the actions entailed in its implementation are likely to be in conflict with other policies.

Example: In detecting the presence of enemy installations from altitude, a UAV1 may choose to use one of several
sensor mechanisms such as EO, IR or SAR2. If the sensor it chooses (or is equipped with) is active as opposed to
passive, this may conflict with a policy forbidding the emission of detectable signals. Such concerns need to be part
of the policy conflict analysis process, given that the exact implementation is not part of the policy definition.

Multiple managers: When an agent receives orders, instructions or advice from more than one agent that it is
obliged to follow, there is the danger that the agent may face conflicting information that it cannot reconcile. This is
due to the separate world views and local goals of the advising agents. A particular assignment of roles to agents may mean that roles which should be carried out by the same physical agent, so that it can internally reconcile any conflicts between their demands, are instead assigned to separate agents. Indeed, various requirements may mean that the roles have to be carried out by separate agents.

Example: A pilot receives advice not only from an air traffic controller (ATC) but also from a TCAS3 onboard the
aircraft, i.e. multiple managers. In the 2002 mid-air collision over Lake Constance, ATC advice that conflicted with a TCAS advisory was found to be a cause; for full details, the interested reader should consult the accident report (ref. 10). In such a case, either a clear resolution policy stating which agent’s advice to follow, and when, must be in place, or a policy must be defined such that the agents reconcile any differences of opinion between themselves. The former kind of policy exists in various regulations, e.g. “If pilots simultaneously receive instructions to manoeuvre from ATC and an RA [TCAS Resolution Advisory] which are in conflict, the pilot should follow the RA”4, but these were found by the accident report to be “incomplete and misleading”. The latter policy clearly places greater requirements on the physical
systems, since it introduces communications that are not part of the original agent model. In this example it would
mean a ‘ downlink’ carrying data from TCAS to ATC so that the ATC operator could see any advisories given to
the pilot and thereby avoid presenting him with conflicting information. Such technology exists but has not yet been
introduced worldwide.

Self-management: In the case of self-management, role assignment leads to the situation in which an agent
effectively spans two or more levels of the authority hierarchy. The agent is subject to policies that have the agent itself as their target. It should be obvious when a conflict of self-management arises; however, when specifying policy targets in terms of roles it is all too easy to assign such policies to the same agent. The agent model must be consulted when
developing policy, so as to avoid this type of conflict.

Example: Consider an agent that, instead of reporting its position to a separate authority, reports to itself. Clearly, what was intended by this policy is for one agent to maintain a check on other agents’ positions. This enables, for example, long-range artillery to be aware of other agents’ locations in order to avoid friendly-fire accidents. In this instance, however, the agent has been assigned the role of maintaining a check on itself, thereby negating any benefit from having a separate agent corroborate its reported position.

Conflict of interests: A conflict of interests occurs because an agent has multiple interests that it must service. Such
a conflict can arise as a result of various roles being assigned to the agent with conflicting responsibilities. Conflicts
of interest can also stem from conflicts between dependability attributes other than safety, such as availability,
security, reliability and performance (ref. 12).

Footnotes:
1 Unmanned Aerial Vehicle
2 EO = Electro-Optical; IR = Infra-Red; SAR = Synthetic Aperture Radar
3 Traffic Alert and Collision Avoidance System
4 Note 3 in JAA TGL No. 11 (ref. 11)

Example: An agent is motivated to minimise coalition casualties through its actions. However, it is similarly
motivated to reduce the casualties of its own forces, which may present a conflict with saving an allied agent.
Moreover, there may be a policy requiring the agent to maintain its own safety at all costs, perhaps because it is not
expendable. This clearly conflicts with its other interests.

A specific instance of a conflict of interests is one in which knowledge gained from one action is used to influence
another action in a manner that is defined to conflict according to the application-specific semantics. In contrast to
the conflict of multiple managers, in which it is desirable for a single agent to be responsible for both actions in
order to resolve any conflicts autonomously using the knowledge it has available to it, a conflict of interests arises
when the agent uses this knowledge contrary to what is best for the global SoS. Obviously, there is an issue of trust
when considering how to resolve this conflict. Can the agent be trusted not to use the knowledge, or must the
responsibilities be partitioned into separate agents?

Conflict of duties: This conflict is a failure to ensure separation of duties. The duties in this case are actions on the
same resource or agent that are defined as conflicting according to the semantics of the application domain. The
separation of duties is very important in safety-critical systems, because it prevents agents acting unilaterally or on
unsubstantiated information. As with the conflict of interests, it is important to stop the actions of malicious agents
as much as the unintentionally harmful actions of otherwise trustworthy agents.

Example: An agent must not nominate and fire at the same target. It should be noted that the conflict does not exist if the target that is nominated is not the same as the one that is engaged: the agent can legitimately nominate targets for other agents whilst itself engaging different targets.

Dependence conflict: The potential for a dependence conflict arises when there exists a dependency between two
policies, either temporal or spatial. A temporal dependence conflict occurs when an agent is required to perform
actions out of the order in which they are required to be effective. A spatial dependence conflict means that an agent
or the target of its action is not in the correct position or physical state for its action to be effective.

Example: One policy obliges long-range artillery to check the positions of any friendly forces in the target area
before firing. Another policy obliges friendly agents, e.g. troop-carrying helicopters, in the area to call in their
position at regular intervals with theatre command. These two policies are temporally dependent and a conflict may
arise if the helicopters do not update their positions frequently enough.
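
One plausible, simplified guard against this temporal dependence conflict is to treat a stale position report as equivalent to an unknown friendly position; the staleness threshold and data shapes below are assumptions for illustration only:

    MAX_REPORT_AGE_S = 300  # assumed staleness bound on a position report

    def clear_to_fire(target_area, friendly_reports, now):
        """Refuse to fire if any friendly unit's last report is stale or
        places it inside the target area (a set of grid references)."""
        for unit, (position, timestamp) in friendly_reports.items():
            if now - timestamp > MAX_REPORT_AGE_S:
                return False  # report too old: the position check is futile
            if position in target_area:
                return False  # known friendly unit inside the target area
        return True

    reports = {"helo_1": ("grid_E5", 100.0)}
    assert clear_to_fire({"grid_D4"}, reports, now=200.0)
    assert not clear_to_fire({"grid_E5"}, reports, now=200.0)
    assert not clear_to_fire({"grid_D4"}, reports, now=900.0)  # stale report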

Dependencies often arise because the policy goal has been decomposed into a linked support pattern, i.e. the sub-
goals interdependently support the parent goal (ref. 13). The alternative is a convergent support pattern, in which the
sub-goals independently support the parent goal. This has implications for considerations of risk, in terms of the importance of the policy goal, when it is in conflict.

Example: Removing one policy goal from a linked support pattern causes the risk to increase to a level comparable
to removing all the policy goals in the pattern. For instance, if the artillery checks the helicopter locations before
firing but the helicopters do not make their current locations known, then the artillery’s checking is futile.
Conversely, removing one policy goal from a convergent support pattern will still increase risk, but its effects will
be mitigated because of the other diverse policies still in effect. For instance, if the artillery no longer checks the
helicopter positions before firing but the helicopters are instead able to check the artillery targets then they may still
have a chance of avoiding friendly fire.

Conflict of resources: Conflicts of resources can either be internal to an agent or external, i.e. intra- or inter-agent
conflict. The assumption is made that resources are either finite in amount, or in some other way limited in the
number of agents that can use them simultaneously. An intra-agent conflict of resources means that an agent is
obliged to carry out multiple actions for which it does not have the time, energy, or other resource necessary to do.
An inter-agent conflict of resources involves two or more agents attempting to use a shared resource that cannot
sustain the demands put upon it by their actions.

Example: An intra-agent conflict of resources would be if a UAV must send an update of its sensor readings to
ground control and also communicate with the flock it is flying in so as to avoid crashing, but it does not have
enough battery power to do both. When resolving the conflict, it must be considered which of the actions is more
safety-critical (presuming that safety is an important objective to satisfy). An inter-agent conflict of resources would
be if two UAVs must both communicate over a channel of limited bandwidth, or fly in a tight air corridor. In the
latter, the airspace can be seen as a shared resource of finite amount.
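
The intra-agent case can be pictured as a simple feasibility check over the agent’s obligations; the sketch below (with invented figures) sheds the least safety-critical obligations when the resource budget is exceeded, reflecting the prioritisation of safety-critical actions mentioned above:

    battery_mwh = 50  # assumed energy budget
    obligations = [
        # (action, energy cost in mWh, safety-criticality: higher = more critical)
        ("send_sensor_update", 30, 1),
        ("communicate_with_flock", 35, 2),  # collision avoidance
    ]

    if sum(cost for _, cost, _ in obligations) > battery_mwh:
        # Intra-agent conflict of resources: keep the most safety-critical
        # obligations that still fit within the budget.
        kept, remaining = [], battery_mwh
        for action, cost, _ in sorted(obligations, key=lambda o: o[2], reverse=True):
            if cost <= remaining:
                kept.append(action)
                remaining -= cost
        print("conflict resolved by keeping:", kept)  # communicate_with_flock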

Policy Conflict Detection

Syntactic conflicts can be analysed by simply considering the policies themselves. Positive/negative and
permission/obligation conflicts can be detected solely from the syntax of the policy. However, semantic conflicts
require a good understanding of the semantics of the system, which are represented by system models. Successful
detection of policy conflicts places a number of requirements on the model of the system. For instance, a conflict of
interests requires that an agent’s motives be clearly expressed, i.e. it is clear when the policies that it is subject to
are motivated by competing local and global (system) goals. A conflict of resources requires that the resources
themselves be modelled, as well as how the agents use them. Multiple-managers and self-management conflicts
necessitate a model of the authority hierarchy.

During the development of agent behaviour, operational goals are assigned to roles, which are in turn apportioned to
agents. Many conflicts result from assigning conflicting goals to roles, or roles to agents. Mostly these include conflicts of interests and duties. However, modality conflicts are likely to be the result of specification errors if they exist within a role. If the adoption of conflicting roles happens at run-time, this is not a semantic conflict, merely a dynamic instance of a syntactic error. Separation of duties is, however, defined by the semantics of the application: two actions performed by the same agent are in conflict only if the semantics of the application describe them as such. To detect these types of conflict, richer domain knowledge is required than the policy statements alone provide. Relationships between concepts in an ontology can be defined by predicates and can help to define when two actions are in conflict.
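
For instance, the separation-of-duties semantics could be captured as a predicate over pairs of actions, as in the following sketch (reusing the earlier PolicyRule representation; the contents of the predicate are an assumed example, not a prescribed ontology):

    # Assumed application-specific semantics: pairs of actions that one
    # agent must not perform on the same target (separation of duties).
    conflicting_duties = {frozenset({"nominate_target", "fire_at_target"})}

    def semantic_conflicts(rules):
        """Permissions held by one agent over one target whose actions the
        domain ontology declares to be conflicting duties."""
        perms = [r for r in rules if r.modality is Modality.PERMIT]
        return [(r1, r2)
                for i, r1 in enumerate(perms) for r2 in perms[i + 1:]
                if r1.subject == r2.subject and r1.target == r2.target
                and frozenset({r1.action, r2.action}) in conflicting_duties]

    # Syntactically innocuous, but a conflict of duties semantically:
    duty_rules = [PolicyRule("agent_a", Modality.PERMIT, "nominate_target", "t1"),
                  PolicyRule("agent_a", Modality.PERMIT, "fire_at_target", "t1")]
    assert semantic_conflicts(duty_rules)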

Policy Conflict Resolution

Conflict between policies, if not detected and resolved, has the potential to cause indecision when choosing the
correct course of action. This indecision may lead to a hazardous state in the system and to the possibility of an
accident. However, some conflicts are natural and may be tolerated indefinitely. Indeed, Edwards (ref. 14) considers conflicts to be a “naturally-arising side-effect of the collaborative process”. The difficulty is in knowing which conflicts are safety-critical in nature and which can be allowed to persist.

According to Moffett and Sloman (ref. 15), there are several opportunities to prevent or resolve conflicts. We use
their classification here to investigate the types of resolution available and their suitability for different classes of
policy conflict.

Language design: Using a structured approach to policy goal expression can mean that the vocabulary simply will not allow inconsistent policies to be defined. Thus language design can rule out many syntactic conflicts by making it impossible to express conflicting policies. Direct positive/negative modality conflicts are the most obvious and easy to detect, but are very serious because there is no means of deciding at run-time what the original intention of the policy-maker was, i.e. whether the action should be permitted or initiated or not. Forcing the policy-maker to prioritise each policy rule explicitly when it is created obviates any indecision when choosing between two conflicting policies. In practice, however, ordering policy rules according to their importance in this manner rarely results in the behaviour that one would expect. Nevertheless, assigning priorities to policy statements is the simplest way of avoiding conflict between them. Hence there have been a number of suggestions for specifying which policy rule has priority in a situation (refs. 7, 16-18); a sketch combining several of these strategies follows the list:

• Absolute ordering. Policies are absolutely ordered by an explicit assignment of a priority level to each rule.
However, assigning meaningful priorities by considering the policies in isolation is prone to error and may
result in arbitrary priorities with no real relationship to the importance of the policies. Typically, there is also
a finite number of priority levels, with the inevitable result that policies assigned the same priority can still
come into conflict. The remainder of this list deals with the relative prioritisation of policies.
• Deny by default. Negative policies have priority over positive ones. This strategy is based on the assumption
that negative policies have a more benign effect on the system. Clearly, according to application-defined
semantics, this assumption may be flawed. The effect of denying an action when it is in conflict with another
policy should be considered before applying this resolution strategy. This technique can also be used in the
opposite sense to favour positive policies and thereby allow actions by default.
• Obsolescence. More recent policies take priority over older ones. This resolution strategy assumes that policies defined or updated more recently are by consequence more ‘up-to-date’, and therefore relevant, while older policies become obsolete.
• Specificity. A more specific policy overrides a more general policy. Implicit in this strategy is the assumption
that general policies use a broad brush to define the rights and privileges of as many agents as possible, while
more specific policies deal with special cases, which may intentionally contradict the ‘background’ policy.
Of course, it is possible that a more specific policy that runs contrary to a more general policy is genuinely in
conflict with it. In this case, the strategy can be used in the opposite sense.
• Authority. A policy issued or defined by an agent of higher authority has priority. The assumption here is that
agents of authority have a more wide-ranging view of the situation and can make policies based on greater
knowledge. In reality, however, it is often the case that agents lower down in the authority hierarchy have
more specific knowledge of the situation and can set more accurate policies that should take priority.
• Privileges. The strongest right allowed has priority over weaker rights. In a conflict situation the policy that
allows an agent to do more takes precedence over other priorities. This strategy is typically found in security
applications.
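
As a rough sketch of how such strategies might be combined (extending the earlier PolicyRule representation; the particular ordering of strategies is an assumption, not a prescription), a resolution function could try absolute ordering first, then specificity, then deny by default:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PrioritisedRule(PolicyRule):
        priority: int = 0     # absolute ordering: higher wins
        specificity: int = 0  # e.g. how narrowly subject/target are constrained

    def resolve(r1, r2):
        """Pick which of two conflicting rules prevails."""
        if r1.priority != r2.priority:        # absolute ordering
            return r1 if r1.priority > r2.priority else r2
        if r1.specificity != r2.specificity:  # specificity
            return r1 if r1.specificity > r2.specificity else r2
        # Deny by default: prefer the negative (forbidding) rule.
        return r1 if r1.modality is Modality.FORBID else r2

    general = PrioritisedRule("agent_a", Modality.PERMIT, "enter", "zone_z", specificity=1)
    special = PrioritisedRule("agent_a", Modality.FORBID, "enter", "zone_z", specificity=2)
    assert resolve(general, special) is special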

Off-line detection and resolution: Certain kinds of conflict can be detected before the policy is put into operation.
The design of the policy language is also important for this, in that a suitably structured language can facilitate
automated type checking and consistency analysis in a way not possible on natural language policy specifications.
This method is suitable for detecting and resolving static conflicts, but can also detect the possibility of dynamically
arising syntactic conflicts. Given more knowledge of the application-specific semantics, such as which types of
actions are incompatible, static semantic conflicts can also be revealed.

Once detected, a decision can be made as to how to resolve the conflict. As mentioned above, prioritisation of
policies is a technique suitable for resolving these conflicts. However, semantic conflicts can be more subtle, hence
simple precedence relationships between policies may not be enough to resolve them. A strategy to resolve conflicts
such as conflict of interests and duties is to define a “policy about policies”, or meta-policy. A meta-policy either
defines what policies can coexist without conflict, or specifies permitted attribute values such that policies that
would otherwise conflict are resolved. For example, the resolution meta-policy for the multiple managers conflict
between ATC and TCAS advice would specify that the pilot take advice from ATC, unless he is within 60 seconds
of a collision, in which case he should follow the TCAS advisory. Similarly, a meta-policy for the conflict of duties
that exists when the same agent can both nominate and engage a single target would prohibit two policies coexisting
that allowed both these actions.
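
The ATC/TCAS meta-policy described above could, as an illustrative sketch only, be parameterised by the predicted time to collision:

    RA_THRESHOLD_S = 60  # the 60-second figure from the example above

    def advisory_to_follow(time_to_collision_s, atc_advice, tcas_ra):
        """Resolve the multiple-managers conflict between ATC and TCAS."""
        if tcas_ra is not None and time_to_collision_s <= RA_THRESHOLD_S:
            return tcas_ra    # within 60 s of a collision: follow the RA
        return atc_advice     # otherwise defer to ATC

    assert advisory_to_follow(45, "descend", "climb") == "climb"
    assert advisory_to_follow(120, "descend", "climb") == "descend"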

On-line prevention: If the possibility of a run-time conflict has not been detected and resolved ahead of time using
the above-mentioned techniques, it falls to on-line prevention. Simulation can be used as an alternative to off-line
analysis of the policy. Simulating the systems in realistic scenarios can provide the means to exercise policy in order
to discover the dynamically occurring but unanticipated combinations of conditions that lead to policy conflicts (ref.
19).

On-line resolution: Conflicts that arise dynamically and are detected only at the moment they occur are too late to prevent. The offending action must therefore be handled gracefully. Resolution is likely to consist of passing
control to a human being, displaying a human-understandable warning or initiating a forward-recovery action (ref.
20).

Summary

Safety policy is motivated by safety goals, which arise from a hazard analysis. The safety goals must be
decomposed into individual agent responsibilities and permissions in order to be implemented. However, deriving a
consistent set of rules from competing, and even non-competing, objectives is a significant challenge. Policy
conflicts can be present in the specification of the policy rules or can occur at run-time through the evaluation of
policy conditions, i.e. they are either static or dynamic. This paper has also shown that conflicts can be categorised
as either syntactic or semantic, where the latter means that conflicts are not intrinsic but are defined as a violation of
some application-specific semantics. Conflicts can be internal, in that there are inconsistencies across a number of
policies with ostensibly the same goal, or external, where the policy conflicts with other goals, such as dependability
attributes, cost or other political considerations, e.g. fewer casualties.

A number of types of policy conflict can be identified, such as modality conflicts, authority conflicts and conflicts
of resources, each of which puts certain requirements on the system models used to derive policy if they are to be
successfully detected. Detection of dynamically occurring conflicts is challenging, because the system conditions
that can lead to conflict are not known prior to making the policy ‘live’. This paper suggests the use of simulation
to exercise policy in this way.

Once detected, a solution for resolving the conflict is necessary. This resolution can be made off-line, typically by
prioritising policy rules such that there is a precedence ordering over them. Assigning priorities is difficult, and this
paper suggests a number of considerations that must be made when carrying out this activity. Semantic policy
conflicts often require greater control than simple prioritisation can provide. Meta-policies describe which policies
can coexist without conflict or provide suitable attribute value bounds such that the policies are never in conflict.

Acknowledgement

This work is carried out under the High Integrity Real Time Systems Defence and Aerospace Research Partnership
(HIRTS DARP), funded by the MoD, DTI and EPSRC. The current members of the HIRTS DARP are BAE
SYSTEMS, Rolls-Royce plc, QinetiQ and the University of York.

References

1. R. Alexander, M. Hall-May, and T. Kelly. Characteristic failure modes in systems of systems. In Proceedings of the 22nd International System Safety Conference, pages 499–508, Providence, Rhode Island, Aug. 2004. System Safety Society.
2. N. Damianou, N. Dulay, E. Lupu, and M. Sloman. Managing security in object-based distributed systems
using Ponder. In Proceedings of the 6th Open European Summer School (Eunice 2000). Twente University
Press, Sept. 2000.
3. M. Hall-May and T. P. Kelly. Planes, trains and automobiles —an investigation into safety policy for
systems of systems. In Proceedings of the 23rd International System Safety Conference, San Diego, CA,
Aug. 2005.
4. M. Hall-May and T. P. Kelly. Defining and decomposing safety policy for systems of systems. In
Proceedings of the 24th International Conference on Computer Safety, Reliability and Security
(SAFECOMP ’05), volume 3688 of LNCS, pages 37–51, Fredrikstad, Norway, Sept. 2005. Springer-Verlag.
5. R. Alexander and T. P. Kelly. Combining simulation with machine learning to build accident models. In
Proceedings of the Third International Workshop on Safety and Security in Multi-Agent Systems
(SASEMAS ’06), pages 1–5, Hakodate, Japan, May 2006.
6. M. Hall-May and T. P. Kelly. Using agent-based modelling approaches to support the development of
safety policy for systems of systems. In J. Gorski, editor, Proceedings of the 25th International Conference
on Computer Safety, Reliability and Security (SAFECOMP ’06), LNCS, Gdansk, Poland, Sept. 2006.
Springer-Verlag.
7. S. Jajodia, P. Samarati, and V. S. Subrahmanian. A logical language for expressing authorizations. In
Proceedings of the 1997 IEEE Symposium on Security and Privacy, pages 31–42, Oakland, CA, USA,
May 1997. IEEE Press.
8. P. Dourish. Open Implementation and Flexibility in Collaboration Toolkits. PhD thesis, University
College, London, June 1996.
9. R. Wies. Using a classification of management policies for policy specification and policy transformation.
In A. S. Sethi, Y. Raynaud, and F. Faure-Vincent, editors, Proceedings of the IFIP/IEEE International
Symposium on Integrated Network Management, volume 4, pages 44–56, Santa Barbara, California, USA,
May 1995. Chapman & Hall.
10. German Federal Bureau of Aircraft Accidents Investigation. Investigation report AX001-1-2/02, 2004.
11. JAA. Temporary Guidance Leaflet No. 11: Guidance for Operators on Training Programmes for the Use of
Airborne Collision Avoidance Systems (ACAS), 1998.
12. G. Despotou, M. Hall-May, and T. P. Kelly. Eliciting safety policy and balancing with operational fitness
in systems of systems. In Proceedings of the 1st IEEE International Conference on System of Systems
Engineering, Apr. 2006.
13. T. Govier. A Practical Study of Argument. Wadsworth, Belmont, CA, 3rd edition, 1992.
14. W. K. Edwards. Flexible conflict detection and management in collaborative applications. In Proceedings
of the 10th Symposium on User Interface Software and Technology, pages 139–148, Banff, Alberta,
Canada, Oct. 1997. ACM Press.
15. J. D. Moffett and M. S. Sloman. Policy conflict analysis in distributed system management. Journal of
Organizational Computing, 4(1):1–22, 1993.
16. W. K. Edwards. Policies and roles in collaborative applications. In Proceedings of the Conference on
Computer-Supported Cooperative Work, pages 11–20, Cambridge, Massachusetts, USA, 1996. ACM
Press.
17. E. C. Lupu and M. Sloman. Conflicts in policy-based distributed systems management. IEEE Transactions on Software Engineering, 25(6):852–869, Nov. 1999.
18. C. N. Ribeiro, A. Zúquete, P. Ferreira, and P. Guedes. SPL: An access control language for security
policies with complex constraints. In Proceedings of the 8th Annual Symposium on Network and
Distributed System Security, pages 89–107, San Diego, California, USA, Feb. 2001.
19. R. Alexander, M. Hall-May, G. Despotou, and T. Kelly. Towards using simulation to evaluate safety policy
for systems of systems. In Proceedings of the 2nd International Workshop on Safety and Security in Multi-Agent Systems (SASEMAS ’05), pages 5–21, July 2005.
20. J. D. Moffett and J. A. McDermid. Policies for safety-critical systems: The challenge of formalisation. In
Proceedings of the 5th IFIP/IEEE International Workshop on Distributed Systems: Operations and
Management, Toulouse, France, Oct. 1994.
Biography

Martin Hall-May, Department of Computer Science, University of York, Heslington, York, YO10 5DD, UK,
telephone – +44 1904 432792, facsimile – +44 1904 432767, email – martin@cs.york.ac.uk

Martin Hall-May is a Research Associate in the Department of Computer Science at the University of York. He joined the High Integrity Systems Engineering (HISE) group in October 2002. Martin is currently part of the High Integrity Real Time Systems Defence and Aerospace Research Partnership (HIRTS DARP) research project, working towards achieving the safety of emerging classes of systems. He graduated from Bristol University in 2002 with an MEng (Hons) in Computer Science with Study in Continental Europe.

Dr Tim Kelly, Department of Computer Science, University of York, Heslington, York, YO10 5DD, UK,
telephone – +44 1904 432764, facsimile – +44 1904 432708, email – tpk@cs.york.ac.uk

Dr Tim Kelly is a Lecturer in software and safety engineering within the Department of Computer Science at the
University of York. He is also Deputy Director of the Rolls-Royce Systems and Software Engineering University
Technology Centre (UTC) at York. His expertise lies predominantly in the areas of safety case development and
management. His doctoral research focussed upon safety argument presentation, maintenance, and reuse using the
Goal Structuring Notation (GSN). Tim has provided extensive consultative and facilitative support in the production
of acceptable safety cases for companies from the medical, aerospace, railways and power generation sectors.
Before commencing his work in the field of safety engineering, Tim graduated with first class honours in Computer
Science from the University of Cambridge. He has published a number of papers on safety case development in
international journals and conferences and has been an invited panel speaker on software safety issues.
