
A DATA FLOW DIAGRAM EDITOR

FOR YOURDON NOTATION

By

Angkanit Pongsiriyaporn

SIU SS: SOT-MSIT-2007-02


A DATA FLOW DIAGRAM EDITOR
FOR YOURDON NOTATION

A Special Study Presented

By

Angkanit Pongsiriyaporn

Master of Science in Information Technology


School of Technology
Shinawatra University

October 2007

Copyright of Shinawatra University


Acknowledgments

First of all, I would like to express my sincere thanks to Asst. Prof. Dr. Paul
Andrew James Mason, my Special Study Advisor, for his support, encouragement and
valuable suggestions. He has been a great teacher who always gives his time and
energy to support all his students. I would also like to convey my thanks to my
committee members, Dr. Md Maruf Hasan and Asst. Prof. Dr. Chakguy
Prakasvudhisarn, for their kind comments and useful advice.
Moreover, I wish to express my heartfelt thanks to my family, especially my
dad and mom, who are always beside me and give me the strength to go on. I am also
thankful to my boss, Mr. Santichai Emyoo, former Director of the Office of Computer
Clustering Promotion (CCP), who gave me the opportunity to study here while
working at CCP.

Abstract

Kotonya and Sommerville (1998) contend that a requirements method should
have certain properties that will enable it to address the difficult problem of
establishing a complete, correct and consistent set of requirements for a software
system. There is, however, no 'ideal' requirements method, so a number of methods
attempt to overcome this deficiency by using a variety of modelling techniques to
formulate requirements. CASE tool support is essential if a method is to have any
realistic hope of scaling up for industrial use and thus entering the mainstream of
software engineering. This special study aims to develop a CASE tool supporting
Data Flow Models, a modelling technique for capturing and describing how data is
processed at different stages in a system. Commercial tools supporting Data Flow
Diagrams already exist, as they do for the variety of other notations used by
practitioners in modelling, analysing and building computer systems. However, a
lack of well-defined approaches to the integration of CASE tools leads to
inconsistencies and limits traceability between their respective data sets.
This special study therefore builds on previous work towards an integrated
CASE tool framework for software/systems engineering known as MAST (Meta-
modelling Approach to System Traceability), which introduces the mechanics of the
framework itself before going on to describe integrated CASE tools for a range of
notations. We concentrate here on developing a Data Flow Diagram tool that can
be accommodated within MAST.

Keywords: Software engineering, Traceability, Data flow diagrams

Table of Contents

Acknowledgments
Abstract
Table of Contents
List of Figures

Chapter 1 Introduction
1.1 Background
1.2 Objectives and Scope

Chapter 2 Literature Review
2.1 Introduction
2.2 Data-Flow Modelling
2.2.1 Structured analysis
2.2.2 Yourdon structured analysis
2.2.3 Extensions to the DFD for real-time systems
2.3 Keynote Papers
2.3.1 Modern structured analysis
2.3.2 Requirements engineering: a road map
2.3.3 An analysis of the traceability problem
2.3.4 No silver bullet - essence and accidents of software engineering
2.3.5 Dependable systems
2.4 Summary

Chapter 3 Tool Support for Yourdon Data Flow Diagram Methodology
3.1 Introduction
3.2 Development Process
3.3 Requirements
3.4 Design
3.5 Summary

Chapter 4 Case Studies Using Yourdon DFD Tool
4.1 Introduction
4.2 Data Flow Diagram for Automatic Telling Machine (ATM)
4.3 Data Flow Diagram for Mail Order Company
4.4 Summary

Chapter 5 Conclusions and Future Work
5.1 Introduction
5.2 Summary and Conclusions
5.3 Future Work
5.3.1 Formal specification of Yourdon CASE tool
5.3.2 Extensions to the tool
5.3.3 Tool enhancements
5.3.4 Further evaluation using industrial case studies
5.4 Lessons Learned

References

Biography

List of Figures

Figure 2.1 Data Flow Diagram Notation
Figure 2.2 The Context-Level Data-Flow Diagram for Issue Library System
Figure 2.3 Level 1 of the Data-Flow Diagram for the Issue Library System
Figure 2.4 An Example of a Process
Figure 2.5 An Example of a Flow
Figure 2.6 A Typical DFD
Figure 2.7 Graphical Representation of a Store
Figure 2.8 A Necessary Store
Figure 2.9 An Implementation Store
Figure 2.10 The Implementation Store Removed
Figure 2.11 Graphical Representation of a Terminator
Figure 2.12 An Example of an Infinite Sink
Figure 2.13 An Example of a Miracle Process
Figure 2.14 A Legitimate Case of a Write-only Store
Figure 2.15 A Complex Data Flow Diagram
Figure 2.16 Leveled Data Flow Diagrams
Figure 2.17 A Balanced DFD Fragment
Figure 2.18 An Unbalanced DFD Fragment
Figure 2.19 Showing Stores at Lower Levels
Figure 2.20 A DFD with Control Flows and Control Processes
Figure 3.1 Waterfall Model of Software Process for Yourdon DFD Tool
Figure 3.2 Class Diagram for Yourdon CASE Tool
Figure 3.3 Sequence Diagram for Yourdon CASE Tool
Figure 3.4 User Interface Design for Yourdon CASE Tool
Figure 4.1 Context Diagram for ATM
Figure 4.2 Overview Diagram for ATM
Figure 4.3 Context Diagram for Mail Order Company
Figure 4.4 Overview Diagram for Mail Order Company
Figure 4.5 Answer Enquiry Sub-Process
Figure 4.6 Process Order Sub-Process
Figure 4.7 Stock Control Sub-Process

Chapter 1
Introduction

In this chapter, we introduce the background, objectives and scope of the project.

1.1 Background
Requirements refer to the needs of the users. They include why the system is
to be developed, what the system is intended to accomplish and what constraints
(ranging from performance and reliability, to cost and design restrictions) are to be
observed.
The contribution of sub-standard requirements to systems that are delivered
late, over budget and which don’t fulfil the needs of users is well-known (Boehm,
1981). Traditionally, the requirements phase of a project was seen as little more than a
front-end to development and as a result was not accorded the same degree of
precision as down-stream activities.
The process of formulating, structuring and modelling requirements is
normally guided by a requirements method. That is, a systematic approach to
discovering, documenting and analysing requirements. Each method will normally
have an associated notation that provides a means of expressing the requirements.
Kotonya and Sommerville (1998) contend that a requirements method should
have certain properties that will enable it to address the difficult problem of
establishing a complete, correct and consistent set of requirements for a software
system. These properties include the following:
• Suitability for agreement with the end-user: the extent to which the
notation is understandable to someone without formal training.
• Precision of definition of notation: the extent to which requirements may
be checked for consistency and correctness using the notation.
• Assistance with formulating requirements: a requirements method must
enable the capture, structuring and analysis of many ideas, perspectives and
relationships at varying levels of detail.
• Definition of the world outside: a requirements model is incomplete unless
it models the environment with which a system interacts.

1
• Scope for malleability: requirements are built gradually over time and
continue to evolve throughout a system's entire lifecycle. The approach
used and resultant specification must therefore be capable of tolerating
incompleteness and be adaptable to change.
• Scope for integrating other approaches: no one requirements approach can
adequately articulate all requirements for a system. It is therefore
important that a requirements method can support the incorporation of other
modelling techniques to allow their complementary strengths to be brought
to bear on a problem.
• Scope for communication: the requirements process is a human endeavour,
so a method needs to be able to support the need for people to
communicate their ideas and obtain feedback.
• Tool support: system development generates a large amount of
information that must be analyzed. A tool imposes consistency and
efficiency on the requirements process.

Unfortunately, there is no 'ideal' requirements method. This is because few (if
any) methods possess all the above attributes. Consequently a number of methods
attempt to overcome this deficiency by using a variety of modelling techniques to
formulate requirements. The system model can be enriched by modelling different
aspects of it using modelling techniques that capture and describe those aspects best.
• Compositional Models: Entity Relationship models may be used to show
how some entities are composed of other entities.
• Classification Models: Object/Inheritance models can be used to show how
entities have common attributes.
• Stimulus-Response Models: State Transition Diagrams may be used to
show how the system reacts to internal and external events.
• Process Models: may be used to show the principal activities and
deliverables involved in carrying out some process.
• Data-Flow Models: Data Flow Diagrams can be used to show how data is
processed at different stages in the system.

2
1.2 Objectives and Scope
This special study is concerned with the last type of model listed above,
namely Data Flow Models. Data Flow modelling is based on the notion that systems
can be modelled as a visualization of the data interaction that the overall system (or
part of it) has with other activities, whether internal or external to the system. Though
their use has diminished somewhat in recent years with the emergence of object-
oriented approaches (and the de facto standard Unified Modelling Language, or UML
(Fowler & Scott, 1999), in particular), they continue to be used in one form or
another1, notably in the aerospace and defence industries.
As the final entry in the desired list of method properties (outlined in sub-
section 1.1) shows, CASE tool support is essential if a method is to have any realistic
hope of scaling up for industrial use and thus entering the mainstream of software
engineering (an area in which formal or mathematical methods, for example, are often
said to fall short). And while it is true that there are already commercial tools
available supporting Data Flow Diagrams, as there are for the numerous other
notations used by practitioners in modelling, analyzing and building computer
systems2, therein lies the problem!
A lack of well-defined approaches to integration of CASE tools currently
leads to inconsistencies and limits traceability between their respective data sets.
Traceability is the common term for mechanisms to record and navigate relationships
between artifacts produced by development and assessment processes. Effective
management of these relationships is critical to the success of projects involving the
development of complex computer-based systems3.

1 RTN-SL (Paynter, Armstrong, & Haveman, 2000) for example is a graphical language for defining
both the functions - termed activities - and behavior of Real-Time systems. Activities are concurrent
processes that exchange information and synchronize through shared data in their connections. RTN-
SL is widely used in the development of software controlling missile systems.
2 These range from use case diagrams that help in establishing requirements, to (often specialized) design
notations (such as RTN-SL), to computer languages such as Java and Ada; for dependable systems
Failure Modes and Effects Analysis may be used to express failure behavior of functions and
components, while the safety case - arguments and evidence supporting claims of trustworthiness - may
be expressed as Goal Structures.
3 Traceability data can be used for many purposes, including helping verify a system meets its
requirements, tracking the effects of changes in requirements, understanding the evolution of an artifact
and determining the rationale underpinning design and implementation (Spanoudakis, 2004).

3
This lack of integration means ultimately that either engineers' choice of tools
and hence notations and techniques is compromised (i.e., they concede to use only
those whose tools can be integrated), or they use their preferred assortment with
poorly integrated tools and risk compromising (i.e., impairing4) quality and their
overall ability to manage a project.
In an effort to overcome the tool integration problem, some vendors have
taken to producing integrated suites of tools, normally supporting a particular
software development process. A worthy example is IBM's Rational software
suite (tailored to the Rational Unified Process (Kruchten, 2000)) which, while offering
lifecycle traceability with ease across artifacts expressed in UML, makes linking and
navigating to notations expressed in other tools far from straightforward. This is
particularly significant for developers of dependable systems who must model for
example, real-time and failure behaviour of a system, aspects not easily represented or
beyond the scope of the UML.
Our position is therefore this: at present, it is not unreasonable that engineers
should demand the freedom to select and use the techniques, methodologies and
notations they wish to use (when they want to use them) and not to have the
engineering process driven (and therefore compromised) by the software tools
available.
This special study will therefore build on previous work (Mattayon, 2007;
Tianvorakoon, 2006) undertaken in the School of Technology at Shinawatra University
towards an integrated CASE tool framework for software/systems engineering. The
framework known as MAST (Meta-modelling Approach to System Traceability)
features in a number of published works (Mason, 2006; Mason & Mattayom, 2007;
Mason & Tanvorakoon, 2006) which introduce the mechanics of the framework itself,
before going on to describe integrated CASE tools for a range of notations. Readers
are referred to these references for more information. We concentrate here on
developing a Data Flow Diagram tool that can be accommodated within MAST.

4 For example, this author knows of a situation where a system architecture was potentially impaired by
engineers 'shoehorning' the original design into a flatter model to accommodate their tool's - or more
precisely its underlying SQL database's - (then) inability to handle recursion.

The rest of this report is therefore organized as follows: Chapter 2 provides a
review of significant literature on data-flow modelling, detailing in particular the
Yourdon approach to DFD construction. Chapter 3 describes and demonstrates the
author’s work towards development of an appropriate MAST compatible tool; this
work is demonstrated by example in Chapter 4 using various case studies. Chapter 5
offers some conclusions and ideas for future work.

Chapter 2
Literature Review

2.1 Introduction
As indicated in Chapter One, the process of eliciting, structuring and
formulating software requirements is normally guided by a method, Data-Flow
Models being one such example.
In this chapter, we introduce the notion of Data-Flow Diagram methods,
concentrating in particular on the Yourdon notation for which tool support is to be
developed. Reader attention is drawn to other Requirements Engineering
methodologies, notably object-oriented methods, such as the Unified Modelling
Language or UML (ibid.) which models a system as a group of interacting objects and
also formal methods such as Z (Spivey, 1989) which are mathematically-based
techniques for the specification, development and verification of software. However,
discussion of such topics is beyond the scope of this report.
The chapter also introduces a number of ‘keynote’ works that formed vital
background reading for the project. These provide a broad context to the work
presented.

2.2 Data-Flow Modelling


The dataflow diagram is one of the most commonly used systems-modeling
tools, particularly for operational systems in which the functions of the system are of
great importance and more complex than the data the system manipulates.
Recall, the Data-Flow model is based on the notion that systems can be
modelled as a visualization of the data interaction that the overall system, or part of it,
has with other activities, whether internal or external to the system.
Data-Flow approaches use Data-Flow Diagrams (DFDs) to graphically
represent the external entities, processes, data-flow and data-stores. Interconnections
between graphical entities are used to show the progressive transformation of data.
Figure 2.1 shows the Data-Flow notation first proposed by Tom DeMarco (DeMarco,
1979) and later refined by Yourdon (1988). In turn, the notation had been borrowed
from earlier papers on graph theory, and it continues to be used as a convenient

notation by software engineers concerned with direct implementation of models of
user requirements.

[Figure omitted: notation symbols labelled Transform, Input, Output, Terminator and Data dictionary]

Figure 2.1 Data Flow Diagram Notation

A DFD is composed of data on the move, shown as a named arrow;
transformations of data into other data, shown as named bubbles; sources and
destinations of data, shown as rectangles; and data in static storage, shown as parallel
lines. Transformations form the basis for further functional decomposition.
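
By way of illustration, these four notational elements map naturally onto a small object model. The following minimal Java sketch shows one way such a model might look; the class and field names are the author's illustration only, not the design of the tool described in Chapter Three:

    // Illustrative model of the four Yourdon DFD elements.
    import java.util.ArrayList;
    import java.util.List;

    abstract class Element {                  // anything a flow can attach to
        final String name;
        Element(String name) { this.name = name; }
    }

    class Process    extends Element { Process(String n)    { super(n); } } // named bubble
    class Store      extends Element { Store(String n)      { super(n); } } // parallel lines
    class Terminator extends Element { Terminator(String n) { super(n); } } // rectangle

    class Flow {                              // named arrow between two elements
        final String name;
        final Element source, target;
        Flow(String name, Element source, Element target) {
            this.name = name; this.source = source; this.target = target;
        }
    }

    class Diagram {                           // one DFD: its elements plus its flows
        final List<Element> elements = new ArrayList<>();
        final List<Flow> flows = new ArrayList<>();
    }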
The first DFD derived by the analyst represents the ‘target’ system at its
context level. The example in Figure 2.2 represents a simplified context level Data-
Flow Diagram for a library system to automate the issue of library items. The aim of
the context diagram in data-flow analysis is to view the system as an unexplored
‘black-box’, and to direct the focus of investigation to the type of data-flows that enter
the system from the source and those that travel from the system to their destination.

[Figure omitted: the Library user and Library assistant terminators exchanging Library card, Return date, Requested item, Library item and Issued item flows with a single Issue library item process]

Figure 2.2 The Context-Level Data-Flow Diagram for Issue Library System

The creation of the context level Data-Flow Diagram also permits the
developer to sketch the boundaries of the target system, so that the client and
developer can agree on the scope of the system to be developed.

[Figure omitted: Check user, Check item and Issue item processes; User database and Item database stores; Library user and Library assistant terminators; flows including library card, requested item, UserID, user status, ItemID, item status, return date, issued item, user details, item details and update details]

Figure 2.3 Level 1 of the Data-Flow Diagram for the Issue Library System

The next level (level 1) of the data-flow diagram for the library system is
constructed by decomposing the top-level system bubble into sub-functions. Figure
2.3 shows the next level of decomposition of the library system. Each function in
Figure 2.3 represents a potential subsystem through further decomposition. Note
figures 2.1 through 2.3 are for reader-orientation only at this stage. A more detailed
explanation follows in sub-section 2.2.2.
It can be seen from the above that the notation is very simple and requires very
little explanation; one can simply look at the diagram and understand it. This is
particularly important when we consider who is supposed to be looking at it – i.e., not
only the analyst, but also the end user!
An important point to note is the lack of uniformity in industry concerning the
DFD notation. Whereas the above figures introduce the highly popular convention
featured in work by both DeMarco and Yourdon (and the focus of this particular
report), variations employ rectangles or rounded rectangles for bubbles, ellipses or
shadowed rectangles for sources and destinations, and squared-off C’s for data stores.
The differences are however purely cosmetic.

2.2.1 Structured analysis.
The data flow approach is typified by the structured analysis method (SA).
The structured analysis method has undergone several refinements since its
introduction in the late 1970s, although its underlying principles have remained the
same. Two major strategies dominate structured analysis: the 'old' method
popularised by DeMarco (1979) and the modern approach refined by Yourdon (1988).

1) DeMarco’s approach
DeMarco favours a top-down approach in which the analyst maps the current
physical system onto the current logical data-flow model. The approach can be
summarized in four steps:
• analysis of current system
• derivation of logical model
• derivation of proposed logical model
• implementation of new physical system

The objective of the first step is to establish the details of the current physical
system, then to abstract from those details the logical data-flow model. The derivation
of the proposed logical model involves modifying the logical model to incorporate
logical requirements. Implementation of the proposed model can be considered as
arriving at a new physical system design.

2) Modern structured analysis


Expanding on DeMarco’s work, the modern approach first appeared in articles
and books by McMenamin and Palmer around 1984. Other well known structured
analysis techniques include: Structured Analysis and Design Technique (SADT)
developed by Ross and Schoman (1977); Structured Requirements Definition (SRD)
developed by Orr (1981) and Structured Systems Analysis and Design Methodology
(SSADM) developed by Eva (1994).
Up until the mid-1980s structured analysis had focussed on information
systems applications and did not provide an adequate notation to address the control
and behavioural aspects of real-time systems engineering problems. Ward and Mellor
(1985) and later Hatley and Pirbhai (1987) introduced real-time extensions into

structured analysis. These extensions resulted in a more robust analysis method that
could be applied effectively to engineering problems.

2.2.2 Yourdon structured analysis.


In this subsection we provide a detailed description of the Yourdon structured
analysis notation, including an outline of the components of Yourdon Data-Flow
Diagrams, a primer on how to draw a simple Data-Flow Diagram, guidelines for
drawing successful Data-Flow Diagrams and instructions on Data-Flow Diagram
levelling.

1) The process
The first component of the DFD is known as a process. Common synonyms
are a bubble, a function, or a transformation. The process shows a part of the system
that transforms inputs into outputs; that is, it shows how one or more inputs are
changed into outputs. The process is represented graphically as a circle, as shown in
Figure 2.4.

Figure 2.4 An Example of a Process

Note that the process is named or described with a single word, phrase, or
simple sentence. A good name will generally consist of a verb-object phrase such as
VALIDATE INPUT or COMPUTE TAX RATE.

2) The flow
A flow is represented graphically by an arrow into or out of a process; an
example of flow is shown in Figure 2.5. As indicated above, the flow is used to
describe the movement of chunks, or packets of information from one part of the
system to another part.

Figure 2.5 An Example of a Flow

For most systems, the flows will represent data, that is, bits, characters,
messages, floating point numbers, etc. But DFDs can also be used to model systems
other than automated, computerized systems; we may choose, for example, to use a
DFD to model an assembly line in which there are no computerized components. In
such a case, the packets or chunks carried by the flows will typically be physical
materials. The flows are of course named. The name represents the meaning of the
packet that moves along the flow.
It is important to remember that the same content may have a different
meaning in different parts of the system. For example, consider the fragment of a
system shown in Figure 2.6. The same chunk of data (e.g., 089-410-9955) has a
different meaning when it travels along the flow labelled PHONE-NUMBER than it
does when it travels along the flow labeled VALID-PHONE-NUMBER. In the first
case, it means a telephone number that may or may not turn out to be valid; in the
second case, it means a phone number that, within the context of this system, is
known to be valid.

Figure 2.6 A Typical DFD

Note also that the flows show direction: an arrowhead at either end of the flow
(or possibly at both ends) indicates whether data (or material) are moving into or out
of a process (or doing both). The flow shown in Figure 2.6, for example, clearly
shows that a telephone number is being sent into the process labelled VALIDATE
PHONE NUMBER.

3) The store
The store is used to model a collection of data packets at rest. The notation
Yourdon uses for a store is two parallel lines, as shown in Figure 2.7. Typically, the
name chosen to identify the store is the plural of the name of the packets that are
carried by flows into and out of the store.

Figure 2.7 Graphical Representation of a Store

For the systems analyst with a data processing background, it is tempting to
refer to the stores as files or databases (e.g., a disk file organized with Oracle, DB2,
Sybase, Microsoft Access, or some other well-known database management system).
Indeed, this is how stores are typically implemented in a computerized system; but a
store can also be data stored on punched cards, microfilm, microfiche, or optical disk,
or a variety of other electronic forms.
Aside from the physical form that the store takes, there is also the question of
its purpose: does the store exist because of a fundamental user requirement, or does it
exist because of a convenient aspect of the implementation of the system? In the
former case, the store exists as a necessary time-delayed storage area between two
processes that occur at different times. For example, Figure 2.8 shows a fragment of a
system in which, as a matter of user policy (independent of the technology that will be
used to implement the system), the order entry process may operate at different times
(or possibly at the same time) as the order inquiry process. The ORDERS store must
exist in some form, irrespective of media type.

[Figure omitted: an ENTER ORDERS process and a RESPONSE TO INQUIRY process sharing a necessary ORDERS store; flows include ORDER DETAILS, ACKNOWLEDGMENT, ORDER, INQUIRY and RESPONSE]

Figure 2.8 A Necessary Store

Figure 2.9 shows a different kind of store: the implementation store. We might
imagine the systems designer interposing an ORDERS store between ENTER ORDER
and PROCESS ORDER because:
• Both processes are expected to run on the same computer, but there isn’t
enough memory (or some other hardware resource) to fit both processes at
the same time (less likely nowadays, but still possible). Thus, the ORDERS
store has been created as an intermediate file, because the available
implementation technology has forced the processes to execute at different
times.
• Either or both of the processes are expected to run on a computer hardware
configuration that is somewhat unreliable. Thus, the ORDERS store has
been created as a backup mechanism in case either process aborts.
• The two processes are expected to be implemented by different
programmers (or perhaps, different groups of programmers working in
different geographical locations). Thus, the ORDERS store has been
created as a testing and debugging facility so that, if the entire system
doesn’t work, both groups can look at the contents of the store to see
where the problem lies.

[Figure omitted: two order-handling processes with an ORDERS store interposed between them; flows include ORDER DETAILS, ORDER, INVALID ORDER and RESPONSE]

Figure 2.9 An Implementation Store

If we were to exclude such implementation issues and model only the essential requirements of
the system, there would be no need for the ORDERS store; we would instead have a
DFD like the one shown in Figure 2.10.

[Figure omitted: the same processes connected directly, the ORDERS store having been removed; flows include ORDER DETAILS, ORDER, INVALID ORDER and RESPONSE]

Figure 2.10 The Implementation Store Removed

As we have seen in the examples so far, stores are connected by flows to
processes. Thus, the context in which a store is shown in a DFD is one (or both) of the
following:

• A flow from a store


• A flow to a store

In most cases, the flows to and from a store will be labelled although some
analysts prefer to omit labels to and from stores.
A flow from a store is normally interpreted as a read or an access to
information in the store. Specifically, it can mean that:
• A single packet of data has been retrieved from the store; this is, in fact,
the most common example of a flow from a store. Imagine, for example, a
store called CUSTOMERS, where each packet contains name, address,
and phone number information about individual customers. Thus, a typical
flow from the store might involve the retrieval of a complete packet of
information about one customer.
• More than one packet has been retrieved from the store. For example, the
flow might retrieve packets of information about all the customers from
New York City from the CUSTOMERS store.
• A portion of one packet has been retrieved from the store. In some cases, for example, only
the phone number portion of information from one customer might be
retrieved from the CUSTOMERS store.
• Portions of more than one packet have been retrieved from the store. For example, a flow
might retrieve the zip-code portion of all customers living in the state of
New York from the CUSTOMERS store.

A flow to a store is often described as a write, an update, or possibly a delete.


Specifically, it can mean any of the following things:
• One or more new packets are being put into the store. Depending on the
nature of the system, the new packets may be appended (i.e., somehow
arranged so that they are “after” the existing packets); or they may be
placed somewhere between existing packets. This is often an
implementation issue (i.e., controlled by the specific database management
system) in which case the systems analyst ought not to worry about it. It
may, however, be a matter of user policy.
• One or more packets are being deleted, or removed, from the store.
• One or more packets are being modified or changed. This may involve a
change to all of a packet, or (more commonly) just a portion of a packet, or
a portion of multiple packets.

In all these cases, it is evident that the store is changed as a result of the flow
entering the store. It is the process (or processes) connected to the other end of the
flow that is responsible for making the change to the store.
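
The read and write interpretations above can be summarized in a small interface sketch (the names are hypothetical and shown in Java for illustration; this is not the API of the tool described later):

    import java.util.List;
    import java.util.function.Predicate;
    import java.util.function.UnaryOperator;

    // Illustrative store access semantics for packets of type P.
    interface DataStore<P> {
        List<P> read(Predicate<P> selector);    // one packet, many packets, or portions thereof
        void write(P packet);                   // append or insert a new packet
        void delete(Predicate<P> selector);     // remove matching packets
        void modify(Predicate<P> selector,      // change all or part of matching packets
                    UnaryOperator<P> change);
    }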

4) The terminator
A terminator is graphically represented as a rectangle, as shown in Figure
2.11. Terminators represent external entities with which the system communicates.
Typically, a terminator is a person or a group of people, for example, an outside
organization or government agency, or a group or department that is within the same
company or organization, but outside the control of the system being modeled. In
some cases, a terminator may be another system, for example, some other computer
system with which the target system will communicate.

Figure 2.11 Graphical Representation of a Terminator

There are three important things that we must remember about terminators:
• They are outside the system being modeled; the flows connecting the
terminators to various processes (or stores) in a system represent the
interface between the target system and the outside world.
• As a consequence, neither the systems analyst nor the systems designer are
in a position to change the contents of a terminator or the way the
terminator works. That is, the systems analyst is modeling a system with
the intention of allowing the systems designer a considerable amount of
flexibility and freedom to choose the best (or most efficient, or most
reliable, etc.) implementation possible. The systems designer may
implement the system in a considerably different way than it is currently
implemented; the systems analyst may choose to model the requirements
of the system in such a way that it looks considerably different than the
way the user mentally imagines the system now. But the systems analyst

cannot change the contents, or organization, or internal procedures
associated with the terminators.
• Any relationship that exists between terminators will not be shown in
the DFD model. There may indeed be several such relationships, but,
by definition, those relationships are not part of the system we are
studying. Conversely, if there are relationships between the
terminators, and if it is essential for the systems analyst to model those
requirements in order to properly document the requirements of the
system, then, by definition, the terminators are actually part of the
system and should be modelled as processes.

5) Guidelines for constructing DFDs


In the preceding section, it was shown that dataflow diagrams are composed of
four components: processes (bubbles), flows, stores, and terminators. Simple!
However, there are a number of additional guidelines that you need in order to use
DFDs successfully. These include the following:
• Choose meaningful names for processes, flows, stores, and terminators.
• Number the processes.
• Avoid overly complex DFDs.
• Make sure the DFD is internally consistent and consistent with any
associated DFDs.

It is important to check completed diagrams for so-called black holes (infinite
sinks) and miracle processes (spontaneous generation); see Figures 2.12 and 2.13
respectively. That is, data going into a process with no output being produced, and
data created as output by a process that has no inputs.

Figure 2.12 An Example of an Infinite Sink

Figure 2.13 An Example of a Miracle Process

Analysts should also be aware of read-only or write-only stores. This guideline
is analogous to the guideline about input-only and output-only processes; a typical
store should have both inputs and outputs. The only exception to this guideline is the
external store, a store that serves as an interface between the system and some
external terminator (Figure 2.14).

Figure 2.14 A Legitimate Case of a Write-only Store
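
These guidelines lend themselves to automated checking. The following is a sketch of how such checks might look, reusing the illustrative Diagram, Flow, Process and Store classes from Section 2.2 (for brevity, the external-store exception of Figure 2.14 is deliberately ignored):

    import java.util.List;
    import java.util.stream.Collectors;

    class ConsistencyChecker {
        // Processes with inputs but no outputs ("infinite sinks").
        static List<Element> infiniteSinks(Diagram d) {
            return d.elements.stream()
                    .filter(e -> e instanceof Process)
                    .filter(e -> hasIncoming(d, e) && !hasOutgoing(d, e))
                    .collect(Collectors.toList());
        }

        // Processes with outputs but no inputs ("miracle processes").
        static List<Element> miracles(Diagram d) {
            return d.elements.stream()
                    .filter(e -> e instanceof Process)
                    .filter(e -> hasOutgoing(d, e) && !hasIncoming(d, e))
                    .collect(Collectors.toList());
        }

        // Stores that are read-only or write-only (external stores excepted in practice).
        static List<Element> suspectStores(Diagram d) {
            return d.elements.stream()
                    .filter(e -> e instanceof Store)
                    .filter(e -> hasIncoming(d, e) != hasOutgoing(d, e))
                    .collect(Collectors.toList());
        }

        private static boolean hasIncoming(Diagram d, Element e) {
            return d.flows.stream().anyMatch(f -> f.target == e);
        }

        private static boolean hasOutgoing(Diagram d, Element e) {
            return d.flows.stream().anyMatch(f -> f.source == e);
        }
    }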

6) Levelled DFDs
Thus far in this chapter, we have concentrated on simple DFDs. However, real
projects are often very large and complex. The guidelines in sub-section 2.2.2 have
already suggested that we should avoid overly complex diagrams (such as Figure
2.15). But how? If the system is intrinsically complex and has perhaps hundreds of
functions to model, how can such complexity be avoided?

Figure 2.15 A Complex Data Flow Diagram

The answer is to organize the overall DFD in a series of levels so that each
level provides successively more detail about a portion of the level above it. This is
analogous to the organization of maps in an atlas; we would expect to see an overview
map that shows us an entire country, or perhaps even the entire world; subsequent
maps would show us the details of individual countries, individual states within
countries, and so on. In the case of DFDs, the organization of levels is shown
conceptually in Figure 2.16.
The top-level DFD – known as the Context Diagram - consists of only one
bubble, representing the entire system; the dataflows show the interfaces between the
system and the external terminators (together with any external stores that may be
present, as illustrated by Figure 2.14).

Figure 2.16 Leveled Data Flow Diagrams

The DFD immediately beneath the context diagram is known as Figure 0 or
the Overview diagram. It represents the highest-level view of the major functions within
the system, as well as the major interfaces between those functions. As discussed in
sub-section 2.2.2, each of these bubbles should be numbered for convenient reference.
The numbers also serve as a convenient way of relating a bubble to the next
lower-level DFD which more fully describes that bubble (a sketch of the numbering
convention follows the list below). For example:
• Bubble 2 in Figure 0 is associated with a lower-level DFD known as
Figure 2. The bubbles within Figure 2 are numbered 2.1, 2.2, 2.3, and so
on.
• Bubble 3 in Figure 0 is associated with a lower-level DFD known as
Figure 3. The bubbles within Figure 3 are numbered 3.1, 3.2, 3.3, and so
on.
• Bubble 2.2 in Figure 2 is associated with a lower-level DFD known as
Figure 2.2. The bubbles within Figure 2.2 are numbered 2.2.1, 2.2.2, 2.2.3,
and so on.
• If a bubble has a name (which it should have!), then that name is carried
down to the next lower level figure. Thus, if bubble 2.2 is named
COMPUTE SALES TAX, then Figure 2.2, which partitions bubble 2.2
into more detail, should be labeled “Figure 2.2: COMPUTE SALES
TAX.”
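
The numbering convention itself is mechanical and easy to generate. A small illustrative Java helper:

    import java.util.ArrayList;
    import java.util.List;

    class Numbering {
        // Children of bubble "2.2" are "2.2.1", "2.2.2", ...;
        // children of the (unnumbered) context bubble are "1", "2", ...
        static List<String> childNumbers(String parent, int count) {
            List<String> children = new ArrayList<>();
            for (int i = 1; i <= count; i++) {
                children.add(parent.isEmpty() ? String.valueOf(i) : parent + "." + i);
            }
            return children;
        }
    }

For example, childNumbers("2.2", 3) yields 2.2.1, 2.2.2 and 2.2.3, the bubble numbers for Figure 2.2.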

But how do you ensure that the levels of DFDs are consistent (balanced) with
each other? The issue of consistency turns out to be critically important, because the
various levels of DFDs are typically developed by different people in a real-world
project; a senior systems analyst may concentrate on the context diagram and Figure
0, while several junior systems analysts work on Figure 1, Figure 2, and so on. To
ensure that each figure is consistent with its higher-level figure, we follow a simple
rule: the dataflows coming into and going out of a bubble at one level must
correspond to the dataflows coming into and going out of an entire figure at the next
lower level which describes that bubble. Figure 2.17 shows an example of a balanced
dataflow diagram; Figure 2.18 shows two levels of a DFD that are out of balance.

Figure 2.17 A Balanced DFD Fragment

Figure 2.18 An Unbalanced DFD Fragment
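
The balancing rule just described can be phrased as a set comparison: the names of the flows attached to a bubble must equal the names of the flows crossing the boundary of the figure that decomposes it. A sketch under that assumption, reusing the illustrative classes from Section 2.2 (boundary flows are taken to be those whose other end is not an element of the child diagram):

    import java.util.HashSet;
    import java.util.Set;

    class BalanceChecker {
        static boolean isBalanced(Element parentBubble, Diagram parent, Diagram child) {
            // Names of flows into and out of the parent bubble.
            Set<String> bubbleFlows = new HashSet<>();
            for (Flow f : parent.flows) {
                if (f.source == parentBubble || f.target == parentBubble) {
                    bubbleFlows.add(f.name);
                }
            }
            // Names of flows crossing the child diagram's boundary.
            Set<String> boundaryFlows = new HashSet<>();
            for (Flow f : child.flows) {
                if (!child.elements.contains(f.source) || !child.elements.contains(f.target)) {
                    boundaryFlows.add(f.name);
                }
            }
            return bubbleFlows.equals(boundaryFlows);
        }
    }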

How do you show stores at the various levels? The guideline is as follows:
show a store at the highest level where it first serves as an interface between two or
more bubbles; then show it again in EVERY lower-level diagram that further
describes (or partitions) those interface bubbles. Thus, Figure 2.19 shows a store that
is shared by two high-level processes, A and B; the store would be shown again on
the lower-level figures that further describe A and B. The corollary to this is that local
stores, which are used only by bubbles in a lower-level figure, will not be shown at
the higher levels, as they will be subsumed into a process at the next higher level.

Figure 2.19 Showing Stores at Lower Levels

2.2.3 Extensions to the DFD for real-time systems.


The flows discussed throughout this chapter are data flows; they are pipelines
along which packets of data travel between processes and stores. Similarly, the
bubbles in the DFDs we have seen up to now could be considered processors of data.
For a very large class of systems, particularly business systems, these are the only
kind of flows that we need in our system model. But for another class of systems, the
real-time systems, we need a way of modeling control flows (i.e., signals or
interrupts). And we need a way to show control processes (i.e., bubbles whose only
job is to coordinate and synchronize the activities of other bubbles in the DFD). These
are shown graphically with dashed lines on the DFD, as illustrated in Figure 2.20.

Figure 2.20 A DFD with Control Flows and Control Processes

A control flow may be thought of as a pipeline that can carry a binary signal
(i.e., it is either on or off). Unlike the other flows discussed in this chapter, the control
flow does not carry value-bearing data. The control flow is sent from one process to
another (or from some external terminator to a process) as a way of saying, “Wake
up! It’s time to do your job.” The implication, of course, is that the process has been
dormant, or idle, prior to the arrival of the control flow.
A control process may be thought of as a supervisor or executive bubble
whose job is to coordinate the activities of the other bubbles in the diagram; its inputs
and outputs consist only of control flows. The outgoing control flows from the control
process are used to wake up other bubbles; the incoming control flows generally
indicate that one of the bubbles has finished carrying out some task, or that some
extraordinary situation has arisen, which the control bubble needs to be informed
about. There is typically only one such control process in a single DFD.
As indicated above, a control flow is used to wake up a normal process; once
awakened, the normal process proceeds to carry out its job as described by a process
specification. The internal behavior of a control process is different, though: this is
where the time-dependent behavior of the system is modelled in detail. The inside of a
control process is modelled with a state-transition diagram (Harel, 1988) which shows
the various states that the entire system can be in and the circumstances that lead to a
change of state.
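
The behaviour of a control process can thus be captured as a transition table mapping (current state, incoming control flow) pairs to new states. A minimal, self-contained Java sketch (the state and signal names would be supplied by the analyst; those below are invented for illustration):

    import java.util.HashMap;
    import java.util.Map;

    class ControlProcess {
        private String state;
        private final Map<String, String> transitions = new HashMap<>();

        ControlProcess(String initialState) { this.state = initialState; }

        void addTransition(String from, String signal, String to) {
            transitions.put(from + "/" + signal, to);
        }

        // Receive an incoming control flow; the state is unchanged if no transition matches.
        String signal(String controlFlow) {
            return state = transitions.getOrDefault(state + "/" + controlFlow, state);
        }
    }

For example, a control process might move from an IDLE state to a RUNNING state on an enable signal, at which point it would emit outgoing control flows to wake the relevant bubbles.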

2.3 Keynote Papers
In compiling this report, a number of keynote works from current literature
were identified. These helped provide both background and context to the project. In
this subsection we briefly describe those works.

2.3.1 Modern structured analysis.


This seminal tome from 1989 describes in depth the Yourdon Structured
Analysis methodology (Yourdon, 1988). Much has happened since the 1980s, but the
principles it covers – staple tools of the systems analyst (or requirements engineer in
modern parlance) such as Entity-Relationship Modeling, Data Dictionaries, State-
Transition Diagrams and Data Flow Diagrams (the focus of this project) – are still
valid today. Indeed, a recently updated edition of the book (entitled “Just Enough
Structured Analysis”) considers interim developments such as client-server
technology, the Internet/Web, wireless computing, the rise and fall of the dot-coms
and the growing popularity of object-oriented and component-based technologies,
and shows that the material is still relevant.
The remaining keynote works provide a context to this project, considering the
broader topic of requirements and software engineering, as well as related topics
including traceability and dependability.

2.3.2 Requirements engineering: a road map.


The contribution of sub-standard requirements to systems that are delivered
late, over budget and which don’t fulfill the needs of users is well-known.
Requirements Engineering (RE) is a term that emerged in the 1990s encapsulating all
of the activities involved in eliciting, understanding, analysing, documenting and
managing requirements. The term engineering is intended to convey the impression
that this is accomplished through a practical, systematic and repeatable process, even
in areas with more philosophical and social underpinnings. In essence the term
superseded what was previously known as Systems Analysis.
Nuseibeh and Easterbrook (2000) present an overview of the field of
software systems requirements engineering (RE). Their paper describes the main
areas of RE practice, and highlights some of the then key open research issues. The authors

elicitation, modeling and analysis, communicating requirements, agreeing
requirements, and evolving requirements – in a format ideal for those new to the field.
Many of the open research issues they highlight remain that way today. These
include the need for richer models for capturing and analysing non-functional
requirements, such as security, performance and dependability; means for bridging the
gap between requirements elicitation approaches and more formal (mathematical)
specification and analysis techniques; and systematic means for reusing requirements
models (in turn facilitating better selection of commercial off-the-shelf, or COTS,
software).

2.3.3 An analysis of the traceability problem.


Traceability refers to the ability to describe and follow the life of a
requirement, in both a forwards and backwards direction (i.e., from its origins,
through its development and specification, to its subsequent deployment and use, and
through all periods of on-going refinement and iteration in any of these phases).
This paper by Gotel and Finkelstein (1994) has arguably become one of the
most significant on the subject of traceability. In effect Gotel’s PhD thesis proposal,
the paper presents a thorough analysis of the (then) state of the art in terms of
traceability tools, methods and frameworks. The point being that in order to develop
effective tools and methods it is first necessary to ensure the problems they are
intended to solve - i.e., the aim, purpose or objective of traceability - is clearly
defined.
The paper is also notable for the set of definitions (traceability taxonomy) it
presents. It was common then to talk about the direction and type of artifact being
traced; the terms forward and backward traceability were already being (and continue
to be) used when describing navigation to ‘upstream’ and ‘downstream’ artifacts.
Likewise the notion of horizontal and vertical traceability were (are) accepted terms to
describe navigation between artifacts of the same and different types respectively.
Gotel however further differentiated between pre and post requirements specification
traceability in referring to production and deployment of a requirement, arguing in the
case of the former that this was a much neglected aspect.

2.3.4 No silver bullet - essence and accidents of software engineering.
This well-known paper is an almost certain inclusion on any software
engineering undergraduate reading list, or postgraduate for that matter! Written by
Fred Brooks in 1986 (Brooks, 1986) it argues that software engineering is too
complex and diverse for a single "silver bullet" to improve most issues, and that each
issue accounts for only a small portion of all software problems. That hasn't
prevented every new technology and practice that has emerged since from being
trumpeted as a silver bullet to solve the software crisis (cf. formal methods, UML,
CASE tools, maturity models, etc.). In that sense it highlights the need for perspective:
yes, object-oriented methods undoubtedly have their place; but then so too still do
formal and structured analysis methods.

2.3.5 Dependable systems.


During the course of this project, the author was introduced to the notion of
dependable systems, i.e. systems where a level of trustworthiness is required which
allows dependence to be justifiably placed on the services they deliver. Since
advances in requirements engineering are often aimed at improving the dependability
of systems it is useful to explain what is meant by the term dependability in the
context of computer-based systems.
Jean-Claude Laprie (Laprie, 1989) formulated a synthesis of work by the
reliable and fault tolerant computing scientific communities towards a consistent set
of concepts and terminology for dependable computing. Previously, terms such as
safety, reliability and security tended to be considered ‘special cases’ of the discipline
concerned. For example, initially the main concern was that computers should work
reliably (in terms of continuity of service). But then the utilization of computers in
critical applications brought in the care for safety (in terms of non-occurrence of
catastrophic failures). The safest system is one that does nothing, which is not very
useful, so the safety field tended to regard reliability as a subset of safety (or treat the
reliability as an attribute of the system quality). Likewise the emergence of distributed
systems brought in the care for security. Again a secure system must fulfill its
functionalities, and security violations can also be catastrophic, so the security field
tended to consider safety and reliability as subsets of security.
Laprie’s synthesis of dependability contains three parts: the attributes
(availability, reliability, safety, confidentiality, integrity and maintainability – security

is considered the concurrent existence of availability, confidentiality and integrity),
threats to these attributes (faults, errors and failures) and the means by which
dependability is achieved (fault prevention, fault tolerance, fault removal and fault
forecasting).

2.4 Summary
This chapter has demonstrated that the dataflow diagram is a simple but
powerful tool for modeling the functions in a system. The material in this chapter is
sufficient for modeling most classical business-oriented information systems. For
those engaged in real-time systems development (e.g., process control, missile
guidance, or telephone switching), the real-time extensions discussed in sub-section 2.2.3
will be important; for more detail on real-time issues, consult (Ward & Mellor, 1985).
To be effective in an industrial setting, any methodology needs tool support. In
Chapter Three we describe development of a tool supporting Data Flow Diagrams
based on the Yourdon methodology.

Chapter 3
Tool Support for Yourdon Data Flow Diagram Methodology

3.1 Introduction
Irrespective of its potential utility, any modeling notation requires effective
tool support to render it practical in an industrial context. Therefore in this chapter we
describe development of a (prototype) CASE tool for drawing Data Flow Diagrams
expressed in the Yourdon notation, from the initial requirements, through design and
implementation.

3.2 Development Process


As Data Flow Diagrams are a well understood notation (meaning that
requirements for the tool would be relatively stable throughout the project lifecycle), a
Waterfall-based model (Royce, 1970), as shown in Figure 3.1, was used to underpin
development.

Figure 3.1 Waterfall Model of Software Process for Yourdon DFD Tool

The Waterfall model proceeds from start to finish in a purely sequential
manner. For example, the first task is to produce a requirements specification
articulating the services and functions the system shall provide, together with the
constraints (non-functional requirements) both under which the system must operate
and on the development process itself. In this case, we employed Use Cases (Fowler & Scott,

1999) to discover functional requirements, which together with the constraints, were
then expressed in natural language.
When the requirements are fully completed, emphasis moves on to design,
with the aim of providing a "blueprint" for implementers (coders) to follow; i.e. a plan
for implementing the requirements. Here, Class Diagrams (Fowler & Scott, ibid.)
were used to determine the static structure of the system (both in terms of the tool
itself and the database interface), while Sequence Diagrams (Fowler & Scott, ibid.)
enabled us to model interactions among objects (instances of classes) and processes
over time. A database schema was then produced describing the way in which these
objects are represented in a database, and the relationships among them. The
Graphical User Interface, so important in a tool such as this, was designed through
consultation with systems analysts and other potential users (although in keeping with
findings of past research, these were the main source of requirements instability).
Once the design is fully completed, the system can be implemented. For
this task, Eclipse and its Graphical Modeling Framework (GMF) were chosen.
As with any system developed according to a Waterfall type methodology, we
began testing and debugging the system towards the latter stages of implementation;
any faults introduced in earlier phases are removed here. Under normal
circumstances, the system would enter service at this point, whereupon it moves into a
maintenance phase which continues throughout its operational life, allowing new
functionality to be introduced and any remaining bugs removed. However, the
maintenance phase is not relevant here as we were merely developing a prototype to
support and test a hypothesis.

3.3 Requirements
The requirements for the tool were gathered in two ways. First, a comprehensive
review of Yourdon Structured Analysis was undertaken (as described in Chapter
Two). Among the requirements yielded were the following (a sketch of how the
connection rules might be enforced in code appears after the list):
• The system shall support the Yourdon graphical notation syntax,
representing processes as circles, flows as arcs, stores as parallel lines and
terminators as rectangles.
• The system shall enable terminators to connect to processes (only) using
flows.

• The system shall enable processes to connect to stores and to other
processes (only).
• The system shall support control processes as dashed circles and control
flows as dashed arcs.
• The system shall only enable control processes to connect to processes
using control flows (direction of flow from control process to process).
• The system shall support developments of DFDs as a series of levels with
each level providing successively more detail about a portion of the level
above it.
• The system shall support balanced DFD levelling to ensure consistency
between diagram levels.
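
To illustrate how the connection rules above might be enforced, consider the following Java sketch (the enum names are the author's, and process-to-terminator flows are assumed legal so that context diagrams can show system outputs):

    // Illustrative enforcement of the connection rules; not the tool's actual code.
    enum Kind { PROCESS, CONTROL_PROCESS, STORE, TERMINATOR }
    enum FlowKind { DATA, CONTROL }

    class ConnectionRules {
        static boolean isLegal(Kind source, Kind target, FlowKind flow) {
            if (flow == FlowKind.CONTROL) {
                // Control flows run from a control process to a process only.
                return source == Kind.CONTROL_PROCESS && target == Kind.PROCESS;
            }
            switch (source) {
                case TERMINATOR: return target == Kind.PROCESS;    // terminator <-> process only
                case STORE:      return target == Kind.PROCESS;    // a read from the store
                case PROCESS:    return target == Kind.PROCESS     // process -> process
                                     || target == Kind.STORE       // process -> store (a write)
                                     || target == Kind.TERMINATOR; // output to the outside world
                default:         return false;                     // control processes carry no data
            }
        }
    }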

In addition, a survey of modern CASE tools was undertaken to identify user
interface requirements. These include the following (a layout sketch appears after the
list):
• The system shall display the menu along the top of the screen.
• The system shall use cascading menus.
• The system shall maximize the space available for drawing the DFDs.
• The system shall provide a micro-view of the whole diagram.
• The micro-view shall enable zooming into portions of the diagram in the
main window.
• The system shall contain a window displaying a summary of relationships
used in each diagram.
• The system shall enable processes, terminators, stores, flows, control
processes and control flows to be chosen for adding to a diagram using a
palette of buttons.
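
Taken together, these requirements suggest a familiar layout: a menu bar along the top, a palette of element buttons, a large central drawing canvas, and a sidebar holding the micro-view and relationship summary. The following Swing sketch illustrates the intended layout only; the actual tool is built on Eclipse GMF, not Swing:

    import javax.swing.*;
    import java.awt.*;

    public class DfdEditorLayout {
        public static void main(String[] args) {
            JFrame frame = new JFrame("Yourdon DFD Editor (layout sketch)");

            JMenuBar menuBar = new JMenuBar();                  // menu along the top
            JMenu file = new JMenu("File");
            file.add(new JMenuItem("New"));                     // cascading menus
            menuBar.add(file);
            frame.setJMenuBar(menuBar);

            JToolBar palette = new JToolBar(JToolBar.VERTICAL); // palette of element buttons
            for (String tool : new String[] {"Process", "Terminator", "Store",
                                             "Flow", "Control process", "Control flow"}) {
                palette.add(new JButton(tool));
            }

            JPanel canvas = new JPanel();                       // maximized drawing area
            JPanel sidebar = new JPanel(new GridLayout(2, 1));
            sidebar.add(new JLabel("Micro-view"));              // whole-diagram overview
            sidebar.add(new JLabel("Relationships"));           // summary of relationships used

            frame.add(palette, BorderLayout.WEST);
            frame.add(canvas, BorderLayout.CENTER);
            frame.add(sidebar, BorderLayout.EAST);
            frame.setSize(800, 600);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.setVisible(true);
        }
    }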

A scenario for using the tool was then specified as follows:


Create New DFD
1) The user selects New from the File menu
2) The system prompts for a location in which to store the file
3) The user selects the location to save the file
4) The system displays the main screen with a blank drawing area
5) The user selects the appropriate DFD element type (process, terminator, store,
or control process) from the palette
6) The user left-clicks a location in the main drawing window to position the
element
7) The user names the new element
// repeat steps 5)-7)
8) The user selects flow or control flow from the palette
9) The user moves the mouse over the source element
10) The user left-clicks the mouse
11) The user moves the mouse over the target element
12) The user left-clicks the mouse
// repeat steps 5)-12) until the diagram is complete

3.4 Design
Given the requirements above, a Class Diagram was designed showing the
class structure, along with a Sequence Diagram showing interactions between classes
in the system and the user. These appear in Figures 3.2 and 3.3 respectively.

Figure 3.2 Class Diagram for Yourdon CASE Tool

Figure 3.3 Sequence Diagram for Yourdon CASE Tool

A design was produced for the graphical user interface based on the
requirements described in Section 3.3. This is shown in Figure 3.4.

Figure 3.4 User Interface Design for Yourdon CASE Tool

Following design, the system was implemented using the Eclipse
Graphical Modeling Framework (GMF).

3.5 Summary
This chapter has described the process used to develop the Yourdon DFD tool,
as well as introducing key requirements and design artefacts. The tool is demonstrated
using case studies in the following chapter.

Chapter 4
Case Studies Using Yourdon DFD Tool

4.1 Introduction
This chapter presents selected case studies undertaken in validating the
Yourdon Data Flow Diagramming tool introduced in Chapter Three.
Specifically, two case studies will be used, one for an Automatic Telling Machine and
one for a Mail Order company. In each case, the system requirements (facts assumed
to have been gathered from key stakeholders) are described and then the
corresponding screenshots from our Yourdon CASE tool inserted to provide
illustration.

4.2 Data Flow Diagram for Automatic Telling Machine (ATM)


The following requirements have been established for the Automatic Telling
Machine.
• The ATM will provide automatic banking facilities to the bank’s
CUSTOMERS
• Four processes have been identified as being vital to the ATM’s operation:

1) Validate card
This process receives the card and id keyed by the CUSTOMER. These
details are checked by reading information in a file (data store) of client account
details. If invalid, an error message is displayed to the CUSTOMER and their card
returned. If valid, a valid id signal is sent to the Select Service function.

2) Select service
This process receives the valid id signal from Validate Card, followed by a
service request instruction from the CUSTOMER. If the CUSTOMER chooses to
deposit money a deposit service request is sent to the Deposit process, whereas if the
CUSTOMER chooses to withdraw money, a withdraw service request is sent to the
Withdraw process.

3) Deposit
This process receives the deposit service request from Select Service. It then
accepts the amount to be deposited and the deposit envelope from the CUSTOMER.
The client account file is also updated and a deposit receipt issued to the
CUSTOMER together with their returned card.

4) Withdraw
This process receives the withdraw service request from Select Service. It
then accepts the cash amount required which is entered by the CUSTOMER. A check
(read operation) is then made of the client account. If there is enough money
available, client account is updated and cash dispensed to the CUSTOMER along
with their card and a withdraw receipt. If there is not enough money in the account,
an insufficient funds message is displayed to the CUSTOMER and their card returned.

Note: the following conventions apply in the above:


• Bold indicates a Process
• Italic indicates a data flow
• Upper case indicates an EXTERNAL ENTITY
• Underlining indicates a Data Store
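
Although a DFD captures data flows rather than executable behaviour, the decision
logic described above for the Withdraw process can be sketched in code. The
following Java fragment is purely illustrative: the class, method and account names
are hypothetical and form no part of the tool or the case study.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the Withdraw process's decision logic; names are hypothetical.
public class WithdrawProcess {

    // Stand-in for the client account data store (account id -> balance).
    private final Map<String, Double> clientAccounts;

    public WithdrawProcess(Map<String, Double> clientAccounts) {
        this.clientAccounts = clientAccounts;
    }

    /** Returns the message shown to the CUSTOMER; the card is returned in either case. */
    public String withdraw(String accountId, double amount) {
        double balance = clientAccounts.getOrDefault(accountId, 0.0); // read operation
        if (balance >= amount) {
            clientAccounts.put(accountId, balance - amount);          // update data store
            return "cash dispensed (" + amount + "), withdraw receipt issued";
        }
        return "insufficient funds";                                  // error flow
    }

    public static void main(String[] args) {
        WithdrawProcess p = new WithdrawProcess(new HashMap<>(Map.of("A-001", 100.0)));
        System.out.println(p.withdraw("A-001", 80.0)); // enough money available
        System.out.println(p.withdraw("A-001", 80.0)); // now insufficient
    }
}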

The Context Diagram for the ATM developed using our tool appears in Figure
4.1.

Figure 4.1 Context Diagram for ATM

The corresponding ‘Figure 0’ or Overview Diagram, representing the
highest-level view of the major functions within the ATM system as well as the
major interfaces between those functions, is shown in Figure 4.2.

Figure 4.2 Overview Diagram for ATM

4.3 Data Flow Diagram for Mail Order Company


The following requirements have been established for the mail order
company.
• The mail order company deals with CUSTOMERS and MANUFACTURERS.
• Seven processes have been identified as crucial to the business:

1) Answer enquiry
This process receives an initial enquiry and sends out a catalogue of the
company’s products to CUSTOMERS in response. The initial enquiries are stored in a
file called Enquiries.

Further investigation of Answer Enquiry reveals two sub-processes:


- Check enquiry: This process receives the initial enquiry which is stored on
file (Enquiries). A catalogue request is then sent to a further process called
Send Catalogue.

- Send catalogue: This process receives the catalogue request and sends a
catalogue to the CUSTOMER in response.

2) Validate order
This process receives orders from the CUSTOMER; orders are validated by
reading information from a Catalogue file; valid orders are stored in a Customer
Order file; invalid items are sent to the Return Order function of the business; valid
items are sent to a Process Order function.
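
As an aside, the splitting of an order into valid and invalid items against the
Catalogue file can likewise be sketched in code. Again, this is a hypothetical
illustration rather than part of the case study:

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustrative sketch of Validate Order's item-splitting logic; names are hypothetical.
public class ValidateOrder {

    static class SplitResult {
        final List<String> validItems = new ArrayList<>();   // sent on to Process Order
        final List<String> invalidItems = new ArrayList<>(); // sent to Return Order
    }

    /** Items found in the Catalogue file are valid; all others are invalid. */
    static SplitResult split(List<String> orderedItems, Set<String> catalogue) {
        SplitResult r = new SplitResult();
        for (String item : orderedItems) {
            (catalogue.contains(item) ? r.validItems : r.invalidItems).add(item);
        }
        return r;
    }

    public static void main(String[] args) {
        SplitResult r = split(List.of("lamp", "chair", "widget"), Set.of("lamp", "chair"));
        System.out.println("valid: " + r.validItems + ", invalid: " + r.invalidItems);
    }
}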

3) Return order
This process receives the invalid items list and sends a returned order to the
CUSTOMER.

4) Process order
This process receives valid items from the Validate Order process. To compile
the order, it is necessary to read the Stock file to check item availability. In addition,
the Customer Order file is updated and a record entered in a further file recording
Deliveries to Customers. A note of out of stock items is sent to the Stock Control
function and an items unavailable report sent to the Return Order function, while a
purchase note recording the items to be dispatched to the customer is sent to the
Customer Accounting function.

Further investigation of Process Order reveals two sub-processes:


- Check Stock: This process receives the valid items from Validate Order,
reads the Stock file to check for item availability and sends a list of items
available to Arrange Delivery. In addition, an items unavailable report is
sent to the Return Order function and details of out of stock items sent to
Stock Control.
- Arrange Delivery: Receives items available from Check Stock, updates
the Customer Order, Stock and Deliveries to Customers files and sends a
purchase note to the Customer Accounting function.

5) Stock control
This process receives details of out of stock items from Process Order (or
more precisely, its Check Stock sub-process). A Deliveries Pending file is checked to
see if these items are already on order. If not, a manufacturer order is sent to the
MANUFACTURER requesting the items out of stock; a delivery note is received from
the MANUFACTURER when these items arrive. The Stock and Deliveries Pending files
are also updated in response to items received. Finally, a payment alert is sent to the
Manufacturer Accounting function.

Further investigation of Stock Control reveals three sub-processes:


- Check pending: This process receives details of out of stock items from
Process Order. It checks the Deliveries Pending file to see if these items
are already on order. If not, the out of stock items list is sent to a
Manufacturer Order function.
- Manufacturer order: This process receives the out of stock items list from
Check Pending and sends in response a manufacturer order to the
MANUFACTURER requesting items out of stock. The Deliveries Pending
file is also updated.
- Receive from manufacturer: This process receives a delivery note from the
MANUFACTURER when requested items arrive. In response, pending
items are removed from the Deliveries Pending file and the Stock file
updated. This process is also responsible for sending a payment alert to the
Manufacturer Accounting function.

6) Customer accounting
This process receives the purchase note from Process Order (or more
precisely, its Arrange Delivery sub-process). A customer invoice is then sent to the
CUSTOMER. On receiving payment in, the Revenue file is updated.

7) Manufacturer accounting
This process receives the payment alert note from Stock Control (or more
precisely, its Receive from Manufacturer sub-process). On receiving a
manufacturer invoice from the MANUFACTURER, payment out is sent to the
MANUFACTURER and the Outgoing Ledger file updated.

Figure 4.3 presents the Context Diagram (Level 0), showing the top-level
process and its external entities.

Figure 4.3 Context Diagram for Mail Order Company

The following figure (Figure 4.4) presents the Overview Diagram of the seven
main business processes for the mail order company. It shows the flows from external
entities to processes, flows between processes, and flows to/from data stores (files)
indicating read/write operations.

Figure 4.4 Overview Diagram for Mail Order Company

The following figures (4.5, 4.6 and 4.7) show the DFDs for the relevant sub-
processes of the Answer Enquiry, Process Order and Stock Control functions.

Figure 4.5 Answer Enquiry Sub-Process

Figure 4.6 Process Order Sub-Process

Figure 4.7 Stock Control Sub-Process

4.4 Summary
This chapter has demonstrated the Yourdon DFD CASE tool described in
Chapter 3. Two case studies were used to do this: the first showing Context and
Overview diagrams; the second showing Context and Overview diagrams as well as
further decompositions of three Overview-diagram processes.

Chapter 5
Conclusions and Future Work

5.1 Introduction
This chapter summarizes the project, draws conclusions, highlights limitations
and provides some directions for further work.

5.2 Summary and Conclusions


This project has defined and implemented a CASE tool to support the
Yourdon Structured Analysis methodology, or more specifically the Data Flow
Diagramming component.
The motivation for the project was to further extend previous work on MAST,
an integrated software engineering framework, by incorporating Structured Analysis
to complement existing support for object-oriented and formal methods (Mason &
Tianvorakoon, 2006).
In this report we provided a literature review of related work, as well as of
keynote papers on software and requirements engineering. Development of the tool
through requirements and design was also documented (which we anticipate will
contribute to maintenance and to future extensions of MAST), as were two case
studies demonstrating its usage.
One problem with the current implementation is that the tool does not support
balanced diagram leveling, as discussed in sub-section 2.2.3.6. This was caused by an
implementation problem that was never properly solved, namely difficulties in
linking the separate diagrams.
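
For reference, the balancing rule itself is simple to state: the flows entering and
leaving a parent process must match the net inputs and outputs of its child diagram.
A minimal sketch of such a check, assuming flows are compared by name only, might
look as follows (the names here are hypothetical):

import java.util.HashSet;
import java.util.Set;

// Minimal sketch of a leveling/balance check; flows are compared by name only.
public class BalanceCheck {

    /** True when the parent's flows equal the child diagram's external flows. */
    static boolean isBalanced(Set<String> parentFlows, Set<String> childExternalFlows) {
        return parentFlows.equals(childExternalFlows);
    }

    /** Symmetric difference of the two flow sets, for error reporting. */
    static Set<String> discrepancies(Set<String> parentFlows, Set<String> childExternalFlows) {
        Set<String> union = new HashSet<>(parentFlows);
        union.addAll(childExternalFlows);
        Set<String> common = new HashSet<>(parentFlows);
        common.retainAll(childExternalFlows);
        union.removeAll(common);
        return union;
    }

    public static void main(String[] args) {
        Set<String> parent = Set.of("order", "invoice");
        Set<String> child = Set.of("order", "invoice", "audit log");
        System.out.println("balanced? " + isBalanced(parent, child));         // false
        System.out.println("discrepancies: " + discrepancies(parent, child)); // [audit log]
    }
}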

5.3 Future Work


During the course of this project, several areas worthy of future investigation
were identified. These include the following:
• Formal specification of the tool.
• Extension of the tool to support the complete Yourdon notation and the
definition of real-time properties.
• Tool enhancements.
• Further evaluation using industrial case studies.

The following sections provide a brief introduction to possible directions for
further work in the areas identified above.

5.3.1 Formal specification of Yourdon CASE tool.


Following on from the (brief) discussion of dependable systems in Chapter Two,
it would be worthwhile developing a complete formal semantics for the Yourdon
notation, thereby removing any imprecision and ambiguity in its interpretation and
opening the door to its potential use in the development of dependable systems.
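
By way of illustration only, and without claiming to be the eventual semantics,
treating a diagram as a tuple (P, T, S, F) of processes, terminators, stores and flows,
two candidate well-formedness conditions could be written in LaTeX notation as
follows:

\[
  N = P \cup T \cup S, \qquad
  \forall f \in F \,.\, \mathit{src}(f) \in N \,\wedge\, \mathit{tgt}(f) \in N
\]
\[
  \forall f \in F \,.\, \mathit{src}(f) \in P \,\vee\, \mathit{tgt}(f) \in P
\]

The first condition requires every flow to connect two nodes of the diagram; the
second captures the standard Yourdon rule that at least one end of every flow must
be a process (flows may not run directly between stores and/or terminators).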

5.3.2 Extensions to the tool.


A number of possible extensions to the tool emerged during the course of this
project, including support for Entity-Relationship Modeling, Data Dictionaries and
State-Transition Diagrams, all of which are part of the Structured Analysis method.

5.3.3 Tool enhancements.


While sufficient to enable proof of concept, there are a number of general and
specific areas in which the prototype tool discussed here could be improved, for
example the inclusion of an auto-arrange facility and the development of a ‘DFD
explorer’ (tree-view mode) to enhance navigation of large diagrams. In addition, it
would be useful to include a Process Definition Language feature, enabling the
activities involved in producing output from input to be described for functional
primitives. The problem with leveling described above also needs correcting.

5.3.4 Further evaluation using industrial case studies.


It goes without saying that the tool can only be improved by further testing,
ideally through in-depth application to ‘real’ examples, with the findings published in
peer-reviewed literature. In truth, the level of evaluation so far, though consistent with
that expected within the confines of a Master’s degree Special Study, has nonetheless
involved only small worked examples using synthetic data. It is therefore our
intention to continue testing the tool on ever larger examples (if possible using
actual cases obtained from industry) and, if necessary, to refine the toolset based on
our findings.

5.4 Lessons Learned
The original platform chosen for the project was the Eclipse IDE plus GEF
(Graphical Editing Framework). However, GEF was found to be unsuitable for
developing a CASE tool, as it is really just a paintbrush-like drawing tool. Instead,
the Graphical Modeling Framework (GMF), a newer modeling framework from
Eclipse, was considered more appropriate. GMF provides two main components for
developing graphical editors: Runtime and Generation (tooling). The Runtime
provides a service layer and significant diagramming capabilities, while Generation
uses models to define the graphics, tooling and mapping to the domain, generating
code that targets the Runtime.
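
To give a flavour of the Generation component's output, EMF/GMF-generated code
follows a factory idiom. The self-contained toy below imitates that idiom only; real
generated code is produced from the project's domain and mapping models and is far
larger, and all names here (DfdFactory, DfdProcess) are hypothetical rather than
taken from the actual implementation.

// Self-contained toy imitating the factory idiom of EMF/GMF-generated code.
// All names are hypothetical; real generated code is derived from the models.
public class GeneratedStyleDemo {

    interface DfdProcess {               // would be generated from the domain model
        void setName(String name);
        String getName();
    }

    // A generated factory gives the Runtime a uniform way to create model elements.
    static final class DfdFactory {
        static final DfdFactory eINSTANCE = new DfdFactory();

        DfdProcess createDfdProcess() {
            return new DfdProcess() {
                private String name;
                public void setName(String name) { this.name = name; }
                public String getName() { return name; }
            };
        }
    }

    public static void main(String[] args) {
        DfdProcess p = DfdFactory.eINSTANCE.createDfdProcess();
        p.setName("Validate Card");
        System.out.println("created process: " + p.getName());
    }
}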
Before starting the project, my knowledge was limited to one Eclipse
framework, which is used to implement many CASE tools. In retrospect, a more
thorough study of the available tools should have been undertaken.

References

Boehm, B. W. (1981). Software engineering economics. Prentice Hall.


Brooks, F. Jr. (1986). No silver bullet - essence and accidents of software
engineering (Invited Paper). IFIP Congress 1986, 1069-1076.
De Marco, T. (1979). Structured analysis and systems specification. Prentice-Hall.
Eva, M. (1994). Structured systems analysis and design methodology, SSADM
Version 4: A user's guide (2nd ed.). McGraw-Hill.
Fowler, M., & Scott, K. (1999). UML distilled: A brief guide to the standard object
modeling language (2nd ed.). Addison-Wesley.
Gotel, O. C. Z., & Finkelstein, A. C. W. (1994). An analysis of the
requirements traceability problem. Proceedings of the First International
Conference on Requirements Engineering (ICRE).
Harel, D. (1988). On visual formalisms. Communications of the ACM, 31(5),
514-530.
Hatley, D., & Pirbhai, I. (1987). Strategies for real-time system specification. Dorset
House Publishing Company.
Kotonya, G., & Sommerville, I. (1998). Requirements engineering: Processes
and techniques. Wiley.
Kruchten, P. (2000). The rational unified process: An introduction (2nd ed.).
Addison-Wesley.
Laprie, J. C. (1989). Dependability: A unifying concept for reliable computing and
fault tolerance. In T. Anderson (Ed.), Dependability of resilient computers
(pp. 1-28). BSP Professional Books.
Mason, P. (2006). Managing the evolution of dependability arguments for
e-business systems. Proceedings of the Fifth International Conference on e-
Business, Bangkok, Thailand.
Mason, P., & Mattayom, C. (2007). Configuring safety arguments for product
families. Proceedings of the 4th International Joint Conference on Computer
Science & Software Engineering, Khon Kaen, Thailand.
Mason, P., & Tianvorakoon, A. (2006). Model-based approach to systems
traceability for dependable systems. Proceedings of the First International
Conference on Knowledge, Information and Creativity Support Systems,
Ayutthaya, Thailand.

Mattayom, C. (2006). Version management of safety cases. Unpublished master’s
thesis, Shinawatra University, Bangkok, Thailand.
Nuseibeh, B., & Easterbrook, S. (2000). Requirements engineering: A
roadmap. Proceedings of the International Conference on Software Engineering,
35-46.
Orr, K. (1981). Structured requirements definition.
Paynter, S., Armstrong, J., & Haveman, J. (2000). ADL: An activity description
language for real-time networks. Formal Aspects of Computing, 12(2),
120-144.
Ross, D., & Schoman, K. (1977). Structured analysis and design technique (SADT).
IEEE Transactions on Software Engineering, 3(1).
Royce, W. (1970). Managing the development of large software systems:
Concepts and techniques. In Technical Papers of Western Electronic Show and
Convention (WesCon), Los Angeles, USA.
Spanoudakis, G. (2004). Rule-based generation of requirements traceability
relations. Journal of Systems and Software, 72(2), 105-127.
Spivey, J. M. (1989). The Z notation: A reference manual. Prentice-Hall.
Tianvorakoon, A. (2006). Model-based approach to system traceability. Bangkok:
Shinawatra University.
Ward, P. T., & Mellor, S. J. (1985). Structured development for real-time systems.
New Jersey: Prentice Hall.
Yourdon, E. (1988). Modern structured analysis. Prentice-Hall.

Biography

Name: Angkanit Pongsiriyaporn


Date of Birth: September 11, 1982
Place of Birth: Bangkok, Thailand
Institutions Attended: B.Sc. Computer Science (2nd class honors), Bangkok University
Position and Office: Senior Software Engineer, Office of Computer Clustering
Promotion
Home Address: 44/15 Poon Suk, Village Leab Klong 3 Rd., Klong 3,
Klong Luang, Pathumthani
Telephone: 02-5647333
E-mail: angkanit_p@yahoo.com

