In today's world there are many standalone systems that we use day to day. Technically, these systems are categorized as 'embedded systems'. In the following work we focus on the design of these small yet complex real-time systems using simpler and easier methods that consume less time than developing such systems from scratch.
1.1 Overview
“An embedded system is some combination of computer hardware and software, either
fixed in capability or programmable, that is specifically designed for a particular kind of
application device.”
Modelling is a central part of all the activities that lead up to the deployment of a good system. We build models to communicate the desired structure and behaviour of our system, and to visualize and control the system's architecture. We build models to better understand the system we are building, often exposing opportunities for simplification and reuse, and to manage risk. Modelling is a proven and well-accepted engineering technique. New electrical devices, from microprocessors to telephone switching systems, require some degree of modelling to better understand the system and to communicate those ideas to others. Good planning and modelling lead to the creation of better systems [1].
We examine here the entire modelling process of creating executable systems, from use case diagrams through statecharts to the actual executable model. That is, we derive the executable model directly from the model of the system, expressed in the Unified Modeling Language of Booch, Rumbaugh and Jacobson [1]. This greatly reduces the risk of the system failing during the testing phase, after design and implementation are done. That matters because, after all the hard work put into building a system, a single bug may bring the whole system down; likewise, a small design flaw discovered late in a waterfall process may trigger many concurrent problems, and troubleshooting them can cost a great deal of capital and resources.
1. The choice of what models to create has a profound influence on how a problem is attacked and how a solution is shaped.
2. Every model may be expressed at different levels of precision.
3. The best models are connected to reality.
4. No single model is sufficient. Every nontrivial system is best approached through a small set of nearly independent models.
In software, there are several ways to approach a model. The two most common ways are
from an algorithmic perspective and from an object-oriented perspective. The traditional
view of software development takes an algorithmic perspective. In this approach, the
main building block of all software is the procedure or function. This view leads
developers to focus on issues of control and the decomposition of larger algorithms into
smaller ones. There's nothing inherently evil about such a point of view except that it
tends to yield brittle systems. As requirements change (and they will) and the system
grows (and it will), systems built with an algorithmic focus turn out to be very hard to
maintain.
We therefore emphasize easier methods to program and model complex, real-time systems rather than opting for complex manual coding techniques. We first model a wrist watch using Telelogic Rhapsody, a UML CASE tool, where the underlying behavior of the system is programmed using Dr. David Harel's statechart language.
We then synthesize a stopwatch prototype using the Play-Engine, a tool for the scenario-based programming language known as Live Sequence Charts. Dr. David Harel, of the Weizmann Institute of Science, Israel, introduced both languages: the statechart language, which supports programming systems with a state-based approach, in which a system attains various states as it executes, making it a behavioral description language; and Live Sequence Charts, which help in programming the controllers of complex reactive systems using scenarios derived from the user's requirements, making it easier for the developer to synthesize controller code directly from the played-in behavior and the supplied requirements.
We also propose a method of modeling these real-time systems using Quantitative Discrete-time Duration Calculus logic, from which implementable Esterel code can be synthesized, so that the entire process of coding real-time or embedded systems becomes all the more simple.
Chapter 2 : UML based design of Embedded Systems.
Desktop systems usually have more than enough resources, and they access the hardware
only through a multi-layered virtual machine interface which is not always the case with
the embedded systems. Since the dependability requirements are of secondary importance
in desktop systems, the well known methods aim at rapid development, team
collaboration support and low maintenance costs. The same approach is now being tried
for the embedded systems too. Embedded systems are applied for performing tasks
closely related to the hardware, which impacts, besides the interface design and
functionality constrained by the specific platform and application environment, typically
the dependability (fault tolerance) and timing requirements as well. In these cases,
sophisticated task and communication scheduling and special algorithm designs are needed.
The fine-tuned tools and techniques enable the development of highly efficient systems
exploiting most of the platform-specific features.
In recent years, the functions demanded for embedded systems have grown so numerous
and complex that development time is increasingly difficult to predict and control. This
complexity, coupled with constantly evolving specifications, has forced designers to
consider intrinsically flexible implementations—those they can change rapidly. For this
reason, and because hardware-manufacturing cycles are more expensive and time-
consuming, software based implementation has become more popular. Processors’
increased computational power and correspondingly decreased size and cost let designers
move increasingly more functionality to software. However, along with this move comes
increasing difficulty in verifying design correctness. This verification is critical due to
safety considerations in several application domains, transportation and environment
monitoring, for example. In traditional, PC-like software applications, these safety issues
typically don’t come up. In addition, the software world has paid little attention to hard
constraints on software’s reaction speed, memory footprint, and power consumption, all
crucial issues for embedded systems, because they are relatively unimportant in
traditional software development. In embedded systems design, such hard software
characteristics are unavoidable. It is no wonder, then, that there is a crisis in embedded
software design. Along with the pressure on system designers to choose flexible
implementations, the industry is also witnessing IC manufacturers' growing preference for
chips that will work for several designs. This lets manufacturers amortize development
cost over a large number of units. We believe that addressing the embedded software
crisis and the manufacturing problem necessitates radical changes in embedded systems
design. To cope with the tight constraints on performance and cost typical of most
embedded systems, programmers write today’s embedded software using low-level
programming languages such as C or even assembly language. The tools for creating and
debugging embedded software are basically the same as those for standard software:
compilers, assemblers, debuggers, and cross-compilers. The difference is in quality.
Most tools for embedded software are rather primitive compared to equivalent
tools for richer platforms. However, besides these tools, embedded software also
needs hardware support for debugging and performance evaluation, which are far
more important for embedded software than for standard software.
Most system companies have enhanced their software design methodologies to increase
productivity and product quality. Many have rushed to adopt object-oriented approaches
and other syntactically driven methods. Such methods certainly have value in cleaning up
embedded software structure and documentation, but they barely scratch the surface in
terms of quality assurance and time to market. To address these concerns, the industry
has sought to standardize real-time operating systems (RTOSs)—through either de facto
standards or standards bodies such as the German automotive industry’s Open Systems
and the corresponding Interfaces for Automotive Electronics (OSEK) committee. RTOSs
and traditional integrated development environments dominate the embedded-software
market. Embedded-software design automation is still a small segment, but work in this
area could lead to large productivity gains.
Object-oriented methodologies are based on visual design languages. One of the best
known of them is the Unified Modeling Language (UML [1]) that is a standard notation
for software visualization. Designers have become more and more familiar with UML
and UML-based CASE tools that offer support for automatic code and documentation
generation. The design language of the embedded applications is supported by UML
model extensions and a transformation that provides the specific input format of the
embedded development environment. In this way, our approach promises to unify
the advantages of the different development methods. Telelogic Rhapsody offers a widely
known visual design language, external model checkers, automatic code and
documentation generation. This real time system development environment offers real
time scheduling, support of platform-specific operating systems and fault tolerance
middleware. A few other competitors to Telelogic Rhapsody are Rational Rose RT and
Telelogic Tau G2.
2.3 Need for such Model Driven Development Environment for Software, Systems
and Test.
By creating a model that focuses on the functionality and behavior of the
application, and later translating it into a model that contains the
details of the target, RTOS, middleware and communication mechanisms, the
original model can be easily retargeted to different environments.
2.4 Introduction to Telelogic Rhapsody.
Telelogic Rhapsody has the support for Unified Modeling Language (UML) and the
Systems Modeling Language (SysML) with advanced system design and analysis
capabilities, resulting in a complete model driven development environment that spans
the entire process, from requirements capture through analysis, design, implementation
and test. This automation tool reduces complexity, drives productivity and keeps the
systems and software engineers working in sync for faster, better quality results the first
time through the design process.
Rhapsody contains a component called the Rhapsody Model Checker, which assures
engineers that the model and its interfaces are complete and correct. Rhapsody's built-in
simulation environment ensures that the design is free of behavioral errors. Rhapsody
Gateway provides a powerful traceability solution that uses the bi-directional interface
between the model and the leading requirements management and authoring tools and
thus helps in ensuring the design covers the original requirements. The Rhapsody
Reporter Plus capability automatically produces customizable systems engineering
specification documentation at the push of a button. It can generate documentation in
HTML, RTF, TXT, PowerPoint or Word directly from the design, which can be updated or
regenerated each time the design changes for full documentation in a formal report or
design review. Rhapsody's Automatic Test Case Generator also helps in checking the
consistency of the model by generating test cases from the user's interaction with the
system, and then helps us improve the system's performance by comparing it with the
underlying Message Sequence Charts. Thus the development engineer benefits from one
single tool chain that offers real solutions to daunting design, collaboration and test
challenges. The Configuration Management interface allows for concurrent,
collaborative engineering within the tool, enabling developers and engineers to create,
review, share and modify models within a single project, a company or even world wide
via the web. Rhapsody also supports four languages for the development and generation
of implementable code. The Rhapsody Code Generator generates code from all the
structural and behavioral model views and combines it with a real-time framework to
produce an executable application. It supports Java, C, C++ and Ada within its IDE, so
programmers are able to enjoy enhanced productivity with low maintenance costs, and
Dynamic Model-Code Associativity allows users to reflect code edits back into
the model.
Applications can be built using Rhapsody and the Rhapsody framework is a reusable
infrastructure for RTOS applications which provides implementation for the real time
semantics of the model. One major advantage of this approach is that it provides platform
independence by abstracting away the platform (RTOS) APIs. External code can also be
included in the model generated by Rhapsody, and this code can be either
visualized or reverse engineered: if it is reverse engineered, Rhapsody can edit the code;
if it is visualized, it remains untouched by the technologies built inside Rhapsody. The
code can also be directly ported onto the final platform via the Workbench tool available
within Rhapsody, and we can set breakpoints from within Rhapsody while the code is
running on the target board.
2.5 Overview of the methodology.
To model behavior, Rhapsody uses the Statecharts language invented by
Dr. David Harel. This language takes a behavioral approach and hence
helps in modeling the behavior of the objects of the system. The tool thus captures
the behavior of the system designed by the software developer, while the
developer simultaneously keeps a check on the user's main requirements. The
developer can also simulate the model, and once it works as per the needs of the
user, an actually implementable code can be generated from the designed system.
The Unified Modelling Language (UML) is a standard language for writing software
blueprints. The UML may be used to visualize, specify, construct and document the
artefacts of a software-intensive system. The UML is appropriate for modelling systems
ranging from enterprise information systems to distributed web-based applications and
even to hard real-time embedded systems. It is a very expressive language, addressing all the
views needed to develop and deploy such systems.
To understand the UML, one needs to form a conceptual model of the language, and this
requires learning three major elements: the basic building blocks, the rules that put these
blocks together, and some common mechanisms that apply throughout the language. The
UML has four kinds of things, which are the basic building blocks of an object-oriented
system:
1. Structural Things.
2. Behavioral Things.
3. Grouping Things.
4. Annotational Things.
Classes
First, a class is a description of a set of objects that share the same attributes,
operations, relationships, and semantics. A class implements one or more interfaces.
Graphically, a class is rendered as a rectangle, usually including its name, attributes,
and operations as shown in the figure.
Fig. 2.1: A class, rendered as a rectangle with compartments for its name, attributes, and operations.
Interfaces
Use Cases
Fourth, a use case is a description of a set of sequences of actions that a system performs,
each of which yields an observable result of value to a particular actor. A use case is used
to structure the behavioral things in a model. A use case is realized by a collaboration.
Graphically, a use case is rendered as an ellipse with solid lines, usually including only its
name, as in the figure.
Active classes
Fifth, an active class is a class whose objects own one or more processes or threads
and therefore can initiate control activity. An active class is just like a class except
that its objects represent elements whose behavior is concurrent with other elements.
Graphically, an active class is rendered just like a class, but with heavy lines, usually
including its name, attributes, and operations, as in figure.
Figure 2-5 Active Classes
Components
Nodes
Seventh, a node is a physical element that exists at run time and represents a
computational resource, generally having at least some memory and, often,
processing capability. A set of components may reside on a node and may also
migrate from node to node. Graphically, a node is rendered as a cube, usually
including only its name, as in figure.
Figure 2-7 Nodes
These seven elements (classes, interfaces, collaborations, use cases, active classes,
components, and nodes) are the basic structural things that you may include in a UML
model. There are also variations on these seven, such as actors, signals, and utilities
(kinds of classes), processes and threads (kinds of active classes), and applications,
documents, files, libraries, pages, and tables (kinds of components).
Behavioral things are the dynamic parts of UML models. These are the verbs of a model,
representing behavior over time and space. In all, there are two primary kinds of
behavioral things.
Messages
State machines
Second, a state machine is a behavior that specifies the sequences of states an object
or an interaction goes through during its lifetime in response to events, together with
its responses to those events. The behavior of an individual class or a collaboration of
classes may be specified with a state machine. A state machine involves a number of
other elements, including states, transitions (the flow from state to state), events
(things that trigger a transition), and activities (the response to a transition).
Graphically, a state is rendered as a rounded rectangle, usually including its name and
its sub states, if any, as in figure.
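To make these elements concrete, the following hand-written sketch (an illustration only, not Rhapsody-generated code; the state, event, and activity names are hypothetical) shows a two-state machine in which events trigger transitions, and each transition runs an activity:

```java
// A minimal hand-rolled state machine: two states, two events.
// On each event the machine looks up a matching transition and runs its activity.
class Player {
    enum State { STOPPED, PLAYING }
    enum Event { PLAY, STOP }

    private State state = State.STOPPED;   // initial state
    public State getState() { return state; }

    public void dispatch(Event e) {
        switch (state) {
            case STOPPED:
                if (e == Event.PLAY) { state = State.PLAYING; onStart(); }
                break;
            case PLAYING:
                if (e == Event.STOP) { state = State.STOPPED; onStop(); }
                break;
        }
    }
    // activities: responses executed when a transition fires
    private void onStart() { System.out.println("activity: start playback"); }
    private void onStop()  { System.out.println("activity: stop playback"); }
}
```

An event that no transition of the current state handles is simply discarded, which mirrors the default statechart semantics for unconsumed events.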
Grouping things are the organizational parts of UML models. These are the boxes into
which a model can be decomposed. In all, there is only one primary kind of grouping
thing, namely, packages.
Packages
Annotational things are the explanatory parts of UML models. These are the comments
you may apply to describe, illuminate, and remark about any element in a model.
Note
There is one primary kind of annotational thing, called a note. A note is simply a
symbol for rendering constraints and comments attached to an element or a collection
of elements. Graphically, a note is rendered as a rectangle with a dog-eared corner,
together with a textual or graphical comment, as in figure.
This element is the one basic annotational thing you may include in a UML model. We'll
typically use notes to adorn our diagrams with constraints or comments that are best
expressed in informal or formal text. There are also variations on this element, such as
requirements (which specify some desired behavior from the perspective of outside the
model).
2.5.2.1 Dependencies
2.5.2.2 Associations
Second, an association is a structural relationship that describes a set of links, a link being
a connection among objects. Aggregation is a special kind of association, representing a
structural relationship between a whole and its parts. Graphically, an association is
rendered as a solid line, possibly directed, occasionally including a label, and often
containing other adornments, such as multiplicity and role names, as in figure.
2.5.2.3 Generalizations
2.5.2.4 Realizations
1. Class diagram
2. Object diagram
3. Use case diagram
4. Sequence diagram
5. Collaboration diagram
6. Statechart diagram
7. Activity diagram
8. Component diagram
9. Deployment diagram
A class diagram shows a set of classes, interfaces, and collaborations and their
relationships. These diagrams are the most common diagrams found in modeling object-
oriented systems. Class diagrams address the static design view of a system. Class
diagrams that include active classes address the static process view of a system.
An object diagram shows a set of objects and their relationships. Object diagrams
represent static snapshots of instances of the things found in class diagrams. These
diagrams address the static design view or static process view of a system as do class
diagrams, but from the perspective of real or prototypical cases.
A use case diagram shows a set of use cases and actors (a special kind of class) and their
relationships. Use case diagrams address the static use case view of a system. These
diagrams are especially important in organizing and modeling the behaviors of a system.
A statechart diagram shows a state machine, consisting of states, transitions, events, and
activities. Statechart diagrams address the dynamic view of a system. They are especially
important in modeling the behavior of an interface, class, or collaboration and emphasize
the event-ordered behavior of an object, which is especially useful in modeling reactive
systems.
An activity diagram is a special kind of a statechart diagram that shows the flow from
activity to activity within a system. Activity diagrams address the dynamic view of a
system. They are especially important in modeling the function of a system and
emphasize the flow of control among objects.
A deployment diagram shows the configuration of run-time processing nodes and the
components that live on them. Deployment diagrams address the static deployment view
of architecture. They are related to component diagrams in that a node typically encloses
one or more components. This is not a closed list of diagrams. Tools may use the UML to
provide other kinds of diagrams, although these nine are by far the most common you
will encounter in practice.
Dr. David Harel presents a broad extension of the conventional formalism of state
machines and state diagrams that is relevant to the specification and design of complex
discrete-event systems, such as multi-computer real-time systems, communication
protocols and digital control units. Statecharts extend state transition diagrams
with three vital elements: hierarchy, concurrency and communication. Statecharts
are compact, expressive, compositional and modular and can express complex behavior.
A reactive system is event-driven to a large extent and has to continuously react to
external and internal stimuli. The behavior of a reactive system is really the set of
allowed sequences of input and output events, conditions, time constraints and actions.
In general we can see the properties of the reactive systems in the following statements:
1. “in all airborne states, when yellow handle is pulled the seat will be ejected” The
above statement represents clustering.
2. “gear box change of state is independent of the braking system”. This statement
represents independence or orthogonality.
3. “when selection button is pressed enter the selected mode”. This statement
represents change of one state to another.
4. “display mode consists of time display, date display and stopwatch display”. This
statement represents refinement of states.
Conventional state diagrams are flat; the following constructs distinguish
statecharts from plain state diagrams.
State
A state is represented by a rectangle with rounded corners. Each state is given
a name, and every statechart has an initial state, marked by an arrow whose tail
carries a solid circle, as shown below.
(Figure: states Off and On; the transition Off -> On is labeled PressPowerButton[Battery Inserted=TRUE]/evStartSystem(), and On -> Off is labeled PressPowerOff.)
Depth and hierarchy can be modeled by encapsulating states one inside another; we
must make sure that the statechart has an initial state.
(Figure: a superstate named 'state' encapsulating substates Off and On, with the transitions PressPowerButton[Battery Inserted=TRUE]/evStartSystem() and PressPowerOff.)
The above figure exhibits encapsulation of two states ‘On’ and ‘Off’ into one single state
called ‘state’.
Events
Guard Conditions
Guard conditions are the conditions that guard a transition from one state to
another. In the figure above, the transition from the 'Off' state to the 'On' state is
taken when 'PressPowerButton' occurs, but only under the condition that
'Battery Inserted' evaluates to TRUE. This condition
acts as the guard condition for the transition.
Triggered Events
Events that are triggered by other events are said to be triggered events. In the above
figure the event evStartSystem() is the triggered event while the transition from ‘Off’
state to ‘On’ state is taking place.
A timeout can be specified for any state by writing the trigger as
'tm(milliseconds)'.
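The guarded transition and triggered event described above can be sketched in plain Java (an illustrative hand-coding of the figure's behavior, not Rhapsody output; the field names are assumptions):

```java
// Sketch of the guarded transition: PressPowerButton moves Off -> On only
// when the guard (battery inserted) holds, and the transition fires the
// triggered event evStartSystem().
class PowerUnit {
    enum State { OFF, ON }
    private State state = State.OFF;       // initial state
    private boolean batteryInserted;
    private boolean systemStarted;         // set by the triggered event

    public void insertBattery() { batteryInserted = true; }

    public void pressPowerButton() {
        // guard condition: [Battery Inserted = TRUE]
        if (state == State.OFF && batteryInserted) {
            state = State.ON;
            evStartSystem();               // triggered event on the transition
        }
    }
    public void pressPowerOff() { if (state == State.ON) state = State.OFF; }

    private void evStartSystem() { systemStarted = true; }

    public State getState() { return state; }
    public boolean isSystemStarted() { return systemStarted; }
}
```

If the guard fails, the event is consumed without any state change, which is exactly the role a guard plays in a statechart.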
Conditional entrances
Conditional entrances are used to select a state according to which conditions are
satisfied. In the figure below, on the trigger of event α we reach a conditional
entrance, check which of the conditions P, Q or R is satisfied, and then take the
corresponding transition.
Selection entrances
Selection entrances are used to select the transition as per our wish once the selection
entrance state has been reached. In the figure below S is the selection state.
Figure
The above figure shows a state Y consisting of AND components A and
D, with the property that being in Y entails being in some combination of
B or C with E, F or G. We say that Y is the orthogonal product of A and D.
The components A and D are no different conceptually from any other
super states. They also have defaults, internal transitions, etc. Entering Y
from outside, in the absence of any additional information, is actually
entering the combination (B, F) by the default arrows. If event α then
occurs, it transfers B to C and F to G simultaneously, resulting in the new
combined state (C, G). This illustrates a kind of synchronization: a single
event causing two simultaneous happenings. If, on the other hand, µ occurs at
(B, F), it affects only the D component, resulting in (B, E). This, in turn,
illustrates a certain kind of independence, since the transition is the same
whether the system is in B or in C in its A component. Both behaviors are
part of the orthogonality of A and D, which is the term we use to describe
the AND decomposition.
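The orthogonal behavior just described can be sketched as follows (a hand-coded illustration of the AND decomposition; full statechart semantics are richer, but the point is that both components react independently to the same event):

```java
// Orthogonal (AND) state Y from the figure: component A is in B or C,
// component D is in E, F or G. Event ALPHA is taken by both components
// in the same step (B -> C and F -> G); MU affects only the D component.
class OrthogonalY {
    enum AState { B, C }
    enum DState { E, F, G }
    enum Event  { ALPHA, MU }

    // default entry: the combination (B, F)
    AState a = AState.B;
    DState d = DState.F;

    public void dispatch(Event e) {
        // each orthogonal component reacts to the event on its own
        if (e == Event.ALPHA) {
            if (a == AState.B) a = AState.C;
            if (d == DState.F) d = DState.G;
        } else if (e == Event.MU) {
            if (d == DState.F) d = DState.E;  // independent of being in B or C
        }
    }
}
```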
Figure A
The semantics of D is then the exclusive-or (XOR) of A and C; i.e., to be
in state D one must be either in A or in C, and not in both. Thus, D is
really an abstraction of A and C. The state D and its outgoing β arrows
thus capture a common property of A and C, namely, that β leads from
them to B. This idea of allowing transitions to leave a superstate, such as β
in the above figure, might also be approached from a different angle.
Consider the following situations:
Figure B Figure C Figure D
First we might have decided upon the simple situation of Figure B and
then state D could have been refined to consist of A and C, yielding Figure
C. Having made this refinement, the incoming α and β arrows become
underspecified, as they do not say which of A or C is to be entered.
Extending them to point directly to A and C, respectively, does the job,
and if the γ transition within D is added, one indeed obtains Figure A.
Thus, clustering, or abstraction, is a bottom-up concept and refinement is a
top-down one, but both give rise to the OR relationship between a state's
substates.
(Figure E: state State1 with two orthogonal components: one containing Off and On, with the transition Off -> On labeled A/B and On -> Off triggered by C; the other containing SwitchOff and SwitchOn, with the transition SwitchOff -> SwitchOn labeled B/C. The initial states are Off and SwitchOff.)
In Figure E above, State1 starts in the Off and SwitchOff states. When the
transition from Off to On is taken on event A, it generates an event B and
broadcasts it. Due to this broadcast communication, the transition from SwitchOff
to SwitchOn takes place, and since that transition in turn generates an event C,
the transition from On back to Off also takes place in the same tick.
Hence the final status of the state is Off and SwitchOn.
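The broadcast chain of Figure E can be simulated with a small event queue (an illustrative sketch of the semantics, not generated code): the external event A is processed, and the generated events B and C are broadcast and consumed within the same tick.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Broadcast communication from Figure E: taking a transition may generate a
// new event, which is queued and consumed by the other orthogonal component
// before the tick ends (run-to-completion).
class State1 {
    enum Power  { OFF, ON }
    enum Switch { SWITCH_OFF, SWITCH_ON }

    Power power = Power.OFF;            // initial: Off
    Switch sw   = Switch.SWITCH_OFF;    // initial: SwitchOff

    public void tick(String external) {
        Deque<String> queue = new ArrayDeque<>();
        queue.add(external);
        while (!queue.isEmpty()) {      // drain all broadcasts this tick
            String e = queue.poll();
            if (e.equals("A") && power == Power.OFF) {
                power = Power.ON;  queue.add("B");      // transition A/B
            }
            if (e.equals("B") && sw == Switch.SWITCH_OFF) {
                sw = Switch.SWITCH_ON; queue.add("C");  // transition B/C
            }
            if (e.equals("C") && power == Power.ON) {
                power = Power.OFF;                      // On -> Off on C
            }
        }
    }
}
```

Feeding in the external event A leaves the machine in Off and SwitchOn, matching the trace described in the text.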
The following are the details of the model of the wristwatch that we have made. This
wristwatch has the following functionalities:
There are four buttons: the mode button, the set button, and the up and down buttons. The
watch has a display for date and time as well. The user can set two alarms and also use the
stopwatch. The user can set the time using the up and down keys, which increment and
decrement the day, month, year, hours, minutes and seconds.
3.1 Requirements
The Use Case diagram contains all the specific functions that the system performs and
who initiates them. The diagram contains the actors and the use cases that they perform.
(Use case diagram for the WristWatch: actors User and Service_Man; use cases setAlarm1, setAlarm2, useStopWatch and setWatchTime.)
A use case is used to structure behavioral things in the model. The above diagram
highlights that the user can set alarm1, alarm2 and the time of the wrist watch, and can
also use the stopwatch function. It also highlights that the battery of the
wristwatch can be changed by the service person. The two alarms, the stopwatch and
the time-setting function are all components of the overall wrist watch.
Thus we first capture the requirements of the system by drawing the use cases. We are
then clear about what type of functions we need to incorporate in the basic wrist watch
system.
3.3 The Object Diagram and the behavioral modeling using Statecharts.
After the use cases, we model the object diagram, deciding how many classes and
objects we want and what their attributes and primitive operations should be. With this
object model diagram we are also clear about what relations will exist between the
different classes of the model. With these class diagrams we also specify
the scope and types of the variables. Similarly, even for
primitive operations we can specify their fundamental properties as well like the return
type, number and type of attributes, etc. In Rhapsody we can also write the
implementation code of the operation of the class within the Object Model Diagram
specification itself. Also we can write the code for the constructors and destructors as
well.
The Object Model Diagram that has been modeled for the wrist watch contains a main
class called WristWatch and this class contains the constructor which actually makes an
instance of the rest of the four classes Alarm1, Alarm2, StopWatch and SetWristWatch.
These classes have been inherited in WatchAlarm1, WatchAlarm2, WatchStopWatch and
WatchSetWatch respectively. The class WatchFactory is a class which is dependent on
all the classes WatchAlarm1, WatchAlarm2, WatchStopWatch and WatchSetWatch. It
actually returns the instances of these classes and starts their respective statecharts
concurrently to help the system behave in the desired manner.
The class AbstractFactory is an abstract class which has a static variable, theInstance, of
its own type. The function theFactory is a static function which initializes the instances of
WatchAlarm1, WatchAlarm2, WatchStopWatch and WatchSetWatch for the WristWatch
class. These instances are necessary since we want the entire system to run with all of
its statecharts in existence. The following is a description of all the
functions and classes of the system.
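The factory structure just described can be sketched as follows (an illustration of the pattern only; the real Rhapsody-generated classes have different signatures, and createParts() is a hypothetical stand-in for the instance-creating operations):

```java
// Sketch of the factory described above: AbstractFactory holds a single
// static instance (theInstance), and theFactory() lazily creates and
// returns it. Bodies are stubbed for illustration.
abstract class AbstractFactory {
    protected static AbstractFactory theInstance;   // the singleton instance

    public static AbstractFactory theFactory() {
        if (theInstance == null) theInstance = new WatchFactory();
        return theInstance;
    }
    public abstract void createParts();  // builds the watch components
}

class WatchFactory extends AbstractFactory {
    boolean partsCreated;
    @Override
    public void createParts() {
        // here the generated code would instantiate WatchAlarm1, WatchAlarm2,
        // WatchStopWatch and WatchSetWatch and start their statecharts
        // concurrently; stubbed out in this sketch
        partsCreated = true;
    }
}
```

Because theFactory() always returns the same instance, every caller shares the same set of watch components, which is what keeps all the statecharts alive together.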
(Object Model Diagram excerpt showing the constructors WatchAlarm1(), WatchAlarm2() and WatchStopWatch().)
3.3.1.2 Attribute Information for Class: WristWatch
Name | Default | Static | Visibility | Type | Description
Hours | 0 | False | Public | integer | Stores the current hour that the system needs to display.
Year | 2007 | False | Public | integer | Stores the current year of the system.
Argument   Type      Direction
iMn        integer   In
iHr        integer   In
iMn        integer   In
[Figure: Statechart for class WristWatch — Off and On states toggled by evBatteryRemoved and evBatteryInserted; SetWristWatchOff/SetWristWatchOn states toggled by evSetWristWatchOn[isIn(On)] and evSetWristWatchOff; tick states Alarm1Tick, Alarm2Tick and Alarm1Alarm2Tick with tm(1000)/Tick() timeouts calling fnChimePlayAlarm1() and/or fnChimePlayAlarm2(); transitions on evAlarmOn/evAlarmOff and evAlarm2On/evAlarm2Off. Concurrent prober components: Alarm_1_Prober (evAlarmOn[isIn(On)]/System.out.println("Alarm1 is on") between Alarm1Off and Alarm1On), Alarm_2_Prober (evAlarm2On[isIn(On)]/System.out.println("Alarm2 is on") between Alarm2Off and Alarm2On), and StopWatch_Prober (evStartStopWatch[isIn(On)] and evStopStopWatch between StopWatchOff and StopWatchOn).]
3.3.2 Class name: Alarm1
This class encompasses the attributes and the operations that are used for maintaining the
functionality of the first alarm.
3.3.2.1 Attribute Information for Class: Alarm1
[Figure: Statechart for Alarm1 — composite state Alarm1_Active with states HoursSetAlarm1, MinsSetAlarm1 and AlarmSetOn, advanced by evPressButtonS.]
Description: The statechart starts in the initial state HoursSetAlarm1. When the user passes the event evPressButtonU() or evPressButtonD(), the hours for Alarm1 are incremented or decremented respectively. Once the hours are set, the user can pass the event evPressButtonS(), and the statechart transits to the state MinsSetAlarm1. Again, evPressButtonU() and evPressButtonD() increment or decrement the minutes for Alarm1. When the user passes evPressButtonS() once more, the statechart transits to the state AlarmSetOn, where the user decides whether the alarm should be on or off: evPressButtonU() sets the alarm as active, and evPressButtonD() sets it as inactive.
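The setting sequence described above can be approximated in plain Java as a small state machine. This is an illustrative sketch, not Rhapsody output; state and event names follow the statechart, while the 24-hour and 60-minute wrap-around behavior is an assumption.

```java
// Illustrative state machine for the Alarm1 setting sequence.
class Alarm1 {
    enum State { HOURS_SET, MINS_SET, ALARM_SET_ON }

    State state = State.HOURS_SET;   // initial state HoursSetAlarm1
    int hours = 0, mins = 0;
    boolean active = false;

    // Up button: increment the value relevant to the current state.
    void evPressButtonU() {
        switch (state) {
            case HOURS_SET:    hours = (hours + 1) % 24; break;
            case MINS_SET:     mins  = (mins + 1) % 60;  break;
            case ALARM_SET_ON: active = true;            break;
        }
    }

    // Down button: decrement, or deactivate in the final state.
    void evPressButtonD() {
        switch (state) {
            case HOURS_SET:    hours = (hours + 23) % 24; break;
            case MINS_SET:     mins  = (mins + 59) % 60;  break;
            case ALARM_SET_ON: active = false;            break;
        }
    }

    // S button advances HoursSetAlarm1 -> MinsSetAlarm1 -> AlarmSetOn.
    void evPressButtonS() {
        if (state == State.HOURS_SET)     state = State.MINS_SET;
        else if (state == State.MINS_SET) state = State.ALARM_SET_ON;
    }
}
```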
3.3.3 Class name: Alarm2
This class encompasses the attributes and the operations that are used for maintaining the
functionality of the second alarm.
3.3.3.1 Attribute Information for Class: Alarm2
[Figure: Statechart for Alarm2 — composite state Alarm2_Active with states HoursSetAlarm2, MinsSetAlarm2 and AlarmSetOnOff, advanced by evPressButtonS.]
Description: The statechart starts in the initial state HoursSetAlarm2. When the user passes the event evPressButtonU() or evPressButtonD(), the hours for Alarm2 are incremented or decremented respectively. Once the hours are set, the user can pass the event evPressButtonS(), and the statechart transits to the state MinsSetAlarm2. Again, evPressButtonU() and evPressButtonD() increment or decrement the minutes for Alarm2. When the user passes evPressButtonS() once more, the statechart transits to the state AlarmSetOnOff, where the user decides whether the alarm should be on or off: evPressButtonU() sets the alarm as active, and evPressButtonD() sets it as inactive.
3.3.4 Class name: StopWatch
This class houses the variables, events and operations needed for the stopwatch function
of the wristwatch.
3.3.4.1 Attribute Information for Class: StopWatch
[Figure: Statechart for StopWatch — composite state StopWatch_Active with states StopWatchOff and StopWatchOn; evPressButtonU/itsWristWatch.gen(new evStartStopWatch()) moves to StopWatchOn and evPressButtonU/itsWristWatch.gen(new evStopStopWatch()) moves back; tm(10)/itsWristWatch.stopwatchTick() fires while on; evPressButtonD/MilliSec=0; Sec=0; resets the stopwatch.]
Description: The statechart starts in the state StopWatchOff. If the user passes the event evPressButtonU(), it moves from StopWatchOff to StopWatchOn, generating the event evStartStopWatch and sending it to the WristWatch class so that it takes effect in that class's statechart. A timeout of 10 milliseconds then repeatedly triggers the function stopwatchTick() in the WristWatch class, which increments the milliseconds and seconds, until the user passes evPressButtonU() again. If at any point the user passes the event evPressButtonD(), the stopwatch is reset and stops ticking.
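The stopwatch behavior just described can be sketched as a plain Java class. This is a simplified illustration: in the real model, evPressButtonU generates evStartStopWatch/evStopStopWatch to the WristWatch class and a tm(10) timeout drives the ticking, whereas here the tick is a method called directly and the event generation is reduced to a boolean toggle.

```java
// Simplified sketch of the StopWatch behavior described in the text.
class StopWatch {
    boolean running = false;
    int milliSec = 0;
    int sec = 0;

    // evPressButtonU toggles between StopWatchOff and StopWatchOn
    // (in the real model it generates evStartStopWatch/evStopStopWatch).
    void evPressButtonU() { running = !running; }

    // evPressButtonD resets the stopwatch and stops it ticking.
    void evPressButtonD() { milliSec = 0; sec = 0; running = false; }

    // Fired on each tm(10) timeout while in StopWatchOn: advance 10 ms,
    // rolling milliseconds over into seconds.
    void stopwatchTick() {
        if (!running) return;
        milliSec += 10;
        if (milliSec >= 1000) { milliSec -= 1000; sec++; }
    }
}
```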
3.3.5 Class name: SetWristWatch
This class encompasses the variables, events and operations for setting the time.
3.3.5.1 Attribute Information for Class: SetWristWatch
Attribute   Initial Value   Constant   Visibility   Type      Description
Day         0               False      Public       integer   Stores the day to be set for the WristWatch class date.
[Figure: Statechart for SetWristWatch — composite state setWatchTime_Active with states SetDay, SetMonth, SetYear, SetHours, SetMins and SetSecs; evPressButtonS advances between them and finally triggers itsWristWatch.setTime(Hours,Mins,Secs,Day,Month,Year); itsWristWatch.Tick();]
The rest of the classes inherit from the classes described above: WatchAlarm1 inherits from Alarm1, WatchAlarm2 from Alarm2, WatchStopWatch from StopWatch, and WatchSetWatch from SetWristWatch.
3.4 Sequence Diagrams
[Figure: Two sequence diagrams with lifelines ENV, WristWatch, WatchAlarm1, WatchAlarm2, WatchStopWatch and WatchSetWatch. In the first, evBatteryInserted() is followed by three evPressButtonM() events, one evPressButtonS() and a further evPressButtonM(). In the second, evBatteryInserted() is followed by four evPressButtonM() events.]
This sequence diagram shows one possible run of the system: the event evPressButtonM() toggles the system from one statechart to the next until the user reaches the stopwatch mode. From there, the user can either set the time of the watch or go directly back to the normal watch mode. To set the time, the user passes evPressButtonS(), which toggles the system into the statechart of the WatchSetWatch class.
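The mode-toggling behavior captured by the sequence diagrams can be sketched as a simple mode cycle. The event names follow the text, but the exact order of the modes and the names of the mode strings are assumptions made for illustration.

```java
// Toy sketch of the mode toggling driven by evPressButtonM, per the
// sequence diagrams. The cycle order is an assumption.
class WristWatchModes {
    static final String[] MODES = { "Watch", "Alarm1", "Alarm2", "StopWatch" };
    int current = 0;   // start in normal watch mode

    // M button advances to the next statechart/mode in the cycle.
    String evPressButtonM() {
        current = (current + 1) % MODES.length;
        return MODES[current];
    }

    // From stopwatch mode, S moves into the time-setting statechart;
    // otherwise it has no mode-changing effect in this sketch.
    String evPressButtonS() {
        return MODES[current].equals("StopWatch") ? "SetWatch" : MODES[current];
    }
}
```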
Chapter 4: Scenario Based Specification of Embedded Systems.
Structured Analysis and Structured Design (SA/SD) and Object-Oriented Analysis and Design have been the main approaches to high-level design for many years. Structured models are based on functional decomposition and the flow of information, and are depicted using data-flow diagrams. Later, David Harel's statecharts added state-based behavior to these structured-design efforts: a state diagram, or statechart, is associated with each function or activity to describe its behavior. SA/SD-style functional decomposition can also be expressed in programming languages such as Esterel and Lustre. The object-oriented approach evolved for embedded systems with the advent of statecharts: each class can be given a statechart, which then describes the behavior of every instance of that class. Here, however, the issue of connecting structure and behavior is subtler and far more complicated than in SA/SD, because classes represent dynamically changing collections of concrete objects. Executable object modeling became possible only with the help of Harel's statechart language, which served as the basis for Rhapsody.
Requirements are the basis of testing and debugging. Requirements can be specified informally, and a use case describes high-level behavioral requirements in just such an informal way: it is an informal description of a collection of possible scenarios involving the system under discussion and its external actors, describing the observable reactions of the system to events triggered by its users. Usually the description of a use case is divided into the main, most frequently used scenario and exceptional scenarios that give rise to less central behaviors branching out from the main one. Since use cases are informal, they cannot be used for formal testing and debugging. This is where Message Sequence Charts (MSCs) come in: they specify the scenarios as sequences of message interactions between object instances. This visual language was adopted as a standard and is known in UML as the language of sequence diagrams. MSCs thus visualize the concrete scenarios that the more abstract and generic use cases were intended to denote.
Objects in MSCs are denoted by vertical lines, and messages between these instances are denoted by horizontal arrows. Conditional guards, shown as elongated hexagons, specify statements that are to be true when reached. The overall effect of such a chart is to specify a scenario of behavior, consisting of messages flowing between objects and things having to be true along the way.
The style of behavior captured by sequence charts is inter-object, whereas statecharts are intra-object. A sequence chart captures what goes on in one scenario of behavior that takes place between and amongst objects; a statechart captures the full behavioral specification of one of those objects. Statecharts thus provide the details of an object's behavior under all possible conditions, across all the possible stories described by the inter-object sequence charts.
4.1 The need for using Live Sequence Charts
One vital point regarding sequence charts is that the subtle difference in roles between sequence-based and component-based languages for behavior is not made clear in the literature. One comes across articles in which the same phrases are used to introduce sequence diagrams and statecharts: some say that "sequence diagrams are used to specify behavioral requirements", others that "statecharts are used to specify behavioral requirements". Sadly, the reader is told nothing about the fundamental difference in nature and usage between the two. This obscurity is one of the reasons many naive readers come away confused by the multitude of diagram types in the full UML standard, and by the lack of clear recommendations about what it means to specify the behavior of a system in a way that can be implemented and executed.
In 1998, Damm and Harel addressed many deficiencies of MSCs, resulting in an extension of MSCs called Live Sequence Charts (LSCs). The name comes from the language's ability to specify liveness, i.e., things that must occur. Technically, LSCs allow a distinction between possible and necessary behavior, both globally, at the level of an entire chart, and locally, when specifying events, guarding conditions, and progress over time within a chart.
LSCs have two types of charts: universal (enclosed within a solid borderline) and existential (enclosed within a dashed borderline). Universal charts are the more interesting ones, and are used to specify scenario-based behavior that applies to all possible system runs. A universal chart has two parts, turning it into an if-then kind of construct: a prechart specifies a scenario that, if satisfied, forces the system to also satisfy the actual body of the chart, the main chart. Such an LSC thus induces an action-reaction relationship between the scenario appearing in its prechart and the one appearing in the chart body. Taken together, a collection of LSCs provides a set of action-reaction scenarios, and the universal ones must be satisfied at all times during the system run.
Within a chart, the live elements, termed hot, signify things that must occur, and they can be used to specify various modalities of behavior, including anti-scenarios. The other elements, termed cold, signify things that may occur, and they can be used to specify control structures like branching and iteration.
The figure above shows an LSC for displaying the sum of two numbers. It has a prechart in which four events take place and a main chart in which one event takes place. The LSC states that when the user clicks the '+' button, the value in the display is assigned to N1, and this must be followed by a click on the '=' button, at which point the new number in the display is assigned to N2. If this sequence of events occurs, control transfers to the main chart, which states that once the prechart has completed, the sum of N1 and N2 must be returned to the user via the display. The messages in the main chart are hot (depicted by solid red arrows, in contrast to the dashed blue arrows that signify cold messages). If messages are hot, they are bound to be sent, and the receiver is bound to receive them, in order for the chart to be satisfied.
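The prechart/main-chart semantics just described can be mimicked by a small monitor. This is a didactic sketch, not the Play Engine's algorithm: the prechart's four events are compressed into two method calls (each click carries the display value it assigns), and all names are invented for illustration.

```java
// Didactic monitor for the calculator LSC described above. When the
// prechart completes ('+' stores the display in N1, '=' stores the new
// display in N2), the main chart activates and its hot message --
// displaying N1 + N2 -- becomes mandatory.
class SumLscMonitor {
    private int n1, n2;
    private int step = 0;               // progress through the prechart
    private boolean inMainChart = false;

    // Prechart event: click '+', assigning the display value to N1.
    void clickPlus(int display) {
        if (step == 0) { n1 = display; step = 1; }
    }

    // Prechart event: click '=', assigning the new display value to N2
    // and transferring control to the main chart.
    void clickEquals(int display) {
        if (step == 1) { n2 = display; step = 2; inMainChart = true; }
    }

    boolean mainChartActive() { return inMainChart; }

    // Hot-message check: the displayed result must equal N1 + N2.
    boolean satisfiedBy(int shown) { return inMainChart && shown == n1 + n2; }
}
```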
The same holds for conditions. A cold condition is denoted by a hexagon with a dashed blue border; if a cold condition is violated, control simply moves up one level, out of the innermost chart. If a hot condition is violated, the LSC itself is violated and the system must be terminated, signifying an unforgivable error. One way to specify an anti-scenario is therefore to place all the events that constitute the anti-scenario in the prechart and a single false hot condition in the main chart.
LSCs make it possible to understand the relationship between the inter-object requirements view and the intra-object implementable model view. If we have both the structure of the system and the behavioral requirements modeled, we can generate the implementable code. The intra-object system model leads to the final software or hardware, and consists of complete behavior coded for each object. In contrast, it is common to assume that the other side, the set of requirements, is not implementable or executable: a collection of scenarios cannot be considered an implementable model of the system. How would such a system operate? What would it do under general dynamic circumstances? How would we decide which scenarios are relevant when some event suddenly occurs out of the blue? How should we deal with the mandatory, the possible and the forbidden during execution? And how would we know what subsequent behaviors these and other modalities of behavior might entail?
These assumptions are no longer valid. Scenario-based behavior need not be limited to requirements that are specified before the real executable system is built and then used merely to test that system. Scenario-based behavior can now actually be executed.
In the figure above, the arrow between use cases and requirements is dashed for a reason: it does not represent a hard, computerized process. Going from use cases to formal requirements is a soft methodological process performed manually by system designers and engineers. The arrow going from the system model to the requirements depicts testing and verifying the model against the requirements. Say a user has specified the requirements in the form of a sequence diagram A. Later, when an executable intra-object model is specified, the user can execute it and the system will construct an animated sequence diagram. An automated testing tool could then function by comparing this animated sequence diagram with the original one: the system can be asked to compare both diagrams and highlight any inconsistencies, such as contradictions in the partial order of events, or events appearing in one diagram but not in the other. In this way the system helps us debug the behavior of the model against the requirements. Note, however, that even these powerful ways of checking the behavior of a system model against our expectations are limited to the executions we actually carry out. They thus suffer from the same drawbacks as classic testing and debugging: since a system can have an infinite number of runs, some will always go unchecked, and it could be exactly those that violate the requirements. What we really want is a mathematically rigorous model and a precise proof that the model satisfies the requirements, carried out automatically by a computerized verifier. Since we would like to use highly expressive languages like LSCs for the requirements, this means far more than just executing the system model and making sure that the sequence diagrams obtained from the runs are consistent with those prepared in advance. It means making sure, for example, that the things an LSC says are not allowed to happen will indeed never happen, and that the things it says must happen will indeed happen. These are facts that no amount of execution can fully verify. Although general verification is a non-computable algorithmic problem, for finite-state systems it is decidable, though it can still be computationally intractable.
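The kind of automated comparison sketched above, checking an animated run trace against a prepared sequence diagram, can be caricatured as a simple trace comparison. This is an illustration only; real tools compare partial orders of events between lifelines, not flat event lists, and all names here are invented.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative comparison of two event traces: find events appearing
// in one trace but not the other, and detect ordering contradictions
// among the events the traces share.
class TraceComparer {
    // Events present in the first trace but missing from the second.
    static List<String> onlyInFirst(List<String> a, List<String> b) {
        List<String> out = new ArrayList<>(a);
        out.removeAll(b);
        return out;
    }

    // True if the events shared by both traces occur in the same
    // relative order (no ordering contradiction).
    static boolean sameOrder(List<String> a, List<String> b) {
        List<String> aShared = new ArrayList<>(a);
        aShared.retainAll(b);
        List<String> bShared = new ArrayList<>(b);
        bShared.retainAll(a);
        return aShared.equals(bShared);
    }
}
```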
The transition from requirements to the model involves many methodologies. Many system-development methodologies provide guidelines, heuristics, and sometimes carefully worked-out step-by-step processes for it, but it too is a hard process to computerize. The duality between the inter-object, scenario-based style and the intra-object, state-based style of saying what a system does over time makes the synthesis of an implementable model from the requirements difficult. Synthesizing a good approximation of statecharts from LSCs has long been a research problem, and researchers have produced various kinds of synthesis from temporal logic and timing diagrams. The technique involves first determining whether the requirements are consistent, then proving that being consistent and having a model are equivalent notions, and then using the proof of consistency to synthesize an actual model. This approach yields very large models in the worst case, so the problem cannot yet be considered solved.
We also need to bridge the gap between use cases and the more formal languages used to describe the different scenarios, and we cannot hope for a general technique that synthesizes LSCs or temporal logic from use cases automatically. The play-in technique addresses just this. The methodology is supported by a tool called the Play Engine, developed by David Harel's group. The main idea of the play-in process is to raise the level of abstraction in requirements engineering and to work with a look-alike version of the system under development. This enables people who are not familiar with LSCs or similar formal languages to specify the behavioral requirements of a system using a high-level, intuitive and user-friendly mechanism.
What 'play-in' means is that the system developer first builds a GUI of the system with no behavior built into it, with only the basic methods supported by each GUI object, and hands it to the Play Engine. In systems where the layout of hidden objects is meaningful, the user may build a graphical representation of these objects as well; in fact, for GUI-less systems, or for sets of internal objects, we simply use the object model diagram as the GUI. In any case, the user then 'plays' the incoming events on the GUI by clicking buttons, rotating knobs and sending messages to hidden objects in an intuitive drag-and-drop manner. By simply playing with the GUI, often using right-clicks, the user defines the desired reactions of the system and the conditions that may or must hold. As this is being done, the Play Engine continuously does two things: it instructs the GUI to show its current status using the graphical features built into it, and it constructs the corresponding LSCs automatically. The engine queries the application GUI for its structure and methods and interacts with it, thus manipulating the information entered by the user and building and exhibiting the appropriate formal behavior.
After playing in the behavior, the natural next step is to make sure it reflects what the user intended to say. Instead of doing this the conventional way, by building an intra-object model or prototype implementation and using model execution to test it, we would like to test the inter-object behavior directly. Accordingly, the Play Engine extends the GUI-intensive play methodology to make it possible not only to specify and capture the required behavior but to test and validate it as well. This is where the play-out mechanism comes in.
In the play-out approach, the user simply plays the GUI application as he or she would have done when executing a system model, or the final system, limited to end-user and external-environment actions. As this goes on, the Play Engine keeps track of the actions and causes other actions and events to occur as dictated by the universal charts in the specification. Here too, the engine interacts with the GUI application and uses it to reflect the system state at any given moment. This process, with the user operating the GUI application and the Play Engine causing it to react according to the specification, has the effect of working with an executable model, but without an intra-object model ever having to be built or synthesized.
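The play-out loop just described can be caricatured as an event loop consulting a set of trigger-reaction rules. This is a toy sketch of the idea only, and not the Play Engine's actual algorithm, which tracks progress through full LSCs (precharts, partial orders, hot and cold elements) rather than single triggering events; all names are invented.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy caricature of play-out: the user plays events, and the engine
// fires the reactions dictated by registered "universal charts", each
// reduced here to a single trigger event and a reaction on the log.
class ToyPlayOut {
    private final Map<String, Consumer<List<String>>> charts = new HashMap<>();
    final List<String> log = new ArrayList<>();

    // Register a universal chart: when the trigger event is played,
    // the reaction (standing in for the chart body) must be carried out.
    void addChart(String trigger, Consumer<List<String>> reaction) {
        charts.put(trigger, reaction);
    }

    // The user plays an event; the engine logs it and causes the
    // reactions dictated by the matching chart, if any.
    void play(String event) {
        log.add(event);
        Consumer<List<String>> reaction = charts.get(event);
        if (reaction != null) reaction.accept(log);
    }
}
```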
The figure above shows an enhanced development cycle, with the play-in/play-out methodology inserted in the appropriate place. Note that the scenarios that are played out need not be merely the scenarios that were played in: the user is not just tracing previously thought-out stories, but is operating the system freely, as he or she sees fit. The algorithmic mechanism underlying play-out is non-trivial, especially when LSCs are extended with symbolic instances, time and forbidden elements. The LSC specification, together with the play-in and play-out approach, may then be considered not just the system's requirements but also its final implementation. Thus the Play Engine can be used not only for isolated parts of system development but for the entire development cycle.
The figure above shows the parts of system development that can be eliminated if we adopt the LSC approach, for certain kinds of systems.